00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2275 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3534 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.161 Using shallow fetch with depth 1 00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.161 > git --version # timeout=10 00:00:00.200 > git --version # 'git version 2.39.2' 00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.185 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.198 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.209 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:08.209 > git config core.sparsecheckout # timeout=10 00:00:08.220 > git read-tree -mu HEAD # timeout=10 00:00:08.236 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:08.258 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:08.258 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:08.370 [Pipeline] Start of Pipeline 00:00:08.380 [Pipeline] library 00:00:08.381 Loading library shm_lib@master 00:00:08.381 Library shm_lib@master is cached. Copying from home. 00:00:08.395 [Pipeline] node 00:00:08.408 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.410 [Pipeline] { 00:00:08.417 [Pipeline] catchError 00:00:08.418 [Pipeline] { 00:00:08.425 [Pipeline] wrap 00:00:08.430 [Pipeline] { 00:00:08.438 [Pipeline] stage 00:00:08.440 [Pipeline] { (Prologue) 00:00:08.456 [Pipeline] echo 00:00:08.458 Node: VM-host-SM9 00:00:08.464 [Pipeline] cleanWs 00:00:08.474 [WS-CLEANUP] Deleting project workspace... 00:00:08.474 [WS-CLEANUP] Deferred wipeout is used... 
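The checkout above amounts to a depth-1 fetch of refs/heads/master from the Gerrit-hosted build_pool repository followed by a forced checkout of the fetched commit. A minimal manual equivalent, assuming the same URL and revision (credential and proxy setup omitted):

  git init jbp && cd jbp
  # shallow fetch: only the tip of master, no history (mirrors the job's fetch)
  git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # detached, forced checkout of the fetched revision
  git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1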
00:00:08.480 [WS-CLEANUP] done 00:00:08.736 [Pipeline] setCustomBuildProperty 00:00:08.822 [Pipeline] httpRequest 00:00:09.446 [Pipeline] echo 00:00:09.447 Sorcerer 10.211.164.101 is alive 00:00:09.456 [Pipeline] retry 00:00:09.458 [Pipeline] { 00:00:09.472 [Pipeline] httpRequest 00:00:09.476 HttpMethod: GET 00:00:09.477 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.477 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.489 Response Code: HTTP/1.1 200 OK 00:00:09.489 Success: Status code 200 is in the accepted range: 200,404 00:00:09.490 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:19.422 [Pipeline] } 00:00:19.436 [Pipeline] // retry 00:00:19.443 [Pipeline] sh 00:00:19.719 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:19.734 [Pipeline] httpRequest 00:00:20.258 [Pipeline] echo 00:00:20.260 Sorcerer 10.211.164.101 is alive 00:00:20.269 [Pipeline] retry 00:00:20.271 [Pipeline] { 00:00:20.284 [Pipeline] httpRequest 00:00:20.288 HttpMethod: GET 00:00:20.289 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:20.289 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:20.298 Response Code: HTTP/1.1 200 OK 00:00:20.299 Success: Status code 200 is in the accepted range: 200,404 00:00:20.299 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:11.523 [Pipeline] } 00:01:11.542 [Pipeline] // retry 00:01:11.550 [Pipeline] sh 00:01:11.832 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:14.379 [Pipeline] sh 00:01:14.657 + git -C spdk log --oneline -n5 00:01:14.658 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:14.658 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:14.658 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:14.658 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:14.658 9469ea403 nvme/fio_plugin: add trim support 00:01:14.677 [Pipeline] writeFile 00:01:14.692 [Pipeline] sh 00:01:14.973 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.985 [Pipeline] sh 00:01:15.265 + cat autorun-spdk.conf 00:01:15.265 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.265 SPDK_TEST_NVMF=1 00:01:15.265 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.265 SPDK_TEST_URING=1 00:01:15.265 SPDK_TEST_VFIOUSER=1 00:01:15.265 SPDK_TEST_USDT=1 00:01:15.265 SPDK_RUN_UBSAN=1 00:01:15.265 NET_TYPE=virt 00:01:15.265 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.272 RUN_NIGHTLY=1 00:01:15.274 [Pipeline] } 00:01:15.288 [Pipeline] // stage 00:01:15.303 [Pipeline] stage 00:01:15.305 [Pipeline] { (Run VM) 00:01:15.319 [Pipeline] sh 00:01:15.599 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:15.599 + echo 'Start stage prepare_nvme.sh' 00:01:15.599 Start stage prepare_nvme.sh 00:01:15.599 + [[ -n 0 ]] 00:01:15.599 + disk_prefix=ex0 00:01:15.599 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:15.599 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:15.599 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:15.599 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.599 ++ SPDK_TEST_NVMF=1 00:01:15.599 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.599 ++ SPDK_TEST_URING=1 00:01:15.599 ++ SPDK_TEST_VFIOUSER=1 00:01:15.599 ++ SPDK_TEST_USDT=1 00:01:15.599 ++ SPDK_RUN_UBSAN=1 00:01:15.599 ++ NET_TYPE=virt 00:01:15.599 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.599 ++ RUN_NIGHTLY=1 00:01:15.599 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:15.599 + nvme_files=() 00:01:15.599 + declare -A nvme_files 00:01:15.599 + backend_dir=/var/lib/libvirt/images/backends 00:01:15.599 + nvme_files['nvme.img']=5G 00:01:15.599 + nvme_files['nvme-cmb.img']=5G 00:01:15.599 + nvme_files['nvme-multi0.img']=4G 00:01:15.599 + nvme_files['nvme-multi1.img']=4G 00:01:15.599 + nvme_files['nvme-multi2.img']=4G 00:01:15.599 + nvme_files['nvme-openstack.img']=8G 00:01:15.599 + nvme_files['nvme-zns.img']=5G 00:01:15.599 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:15.599 + (( SPDK_TEST_FTL == 1 )) 00:01:15.599 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:15.599 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.599 + for nvme in "${!nvme_files[@]}" 00:01:15.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:15.599 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.600 + for nvme in "${!nvme_files[@]}" 00:01:15.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:15.858 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.858 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:15.858 + echo 'End stage prepare_nvme.sh' 00:01:15.858 End stage prepare_nvme.sh 00:01:15.870 [Pipeline] sh 00:01:16.196 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.196 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img 
-b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:16.196 00:01:16.196 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:16.196 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:16.196 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.196 HELP=0 00:01:16.196 DRY_RUN=0 00:01:16.196 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:16.196 NVME_DISKS_TYPE=nvme,nvme, 00:01:16.196 NVME_AUTO_CREATE=0 00:01:16.196 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:16.196 NVME_CMB=,, 00:01:16.196 NVME_PMR=,, 00:01:16.196 NVME_ZNS=,, 00:01:16.196 NVME_MS=,, 00:01:16.196 NVME_FDP=,, 00:01:16.196 SPDK_VAGRANT_DISTRO=fedora39 00:01:16.196 SPDK_VAGRANT_VMCPU=10 00:01:16.196 SPDK_VAGRANT_VMRAM=12288 00:01:16.196 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.196 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.196 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.196 SPDK_OPENSTACK_NETWORK=0 00:01:16.196 VAGRANT_PACKAGE_BOX=0 00:01:16.196 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:16.196 FORCE_DISTRO=true 00:01:16.196 VAGRANT_BOX_VERSION= 00:01:16.196 EXTRA_VAGRANTFILES= 00:01:16.196 NIC_MODEL=e1000 00:01:16.196 00:01:16.196 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:16.196 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.729 Bringing machine 'default' up with 'libvirt' provider... 00:01:19.296 ==> default: Creating image (snapshot of base box volume). 00:01:19.296 ==> default: Creating domain with the following settings... 
00:01:19.296 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728817500_fa57a6de3bd590254514 00:01:19.296 ==> default: -- Domain type: kvm 00:01:19.296 ==> default: -- Cpus: 10 00:01:19.296 ==> default: -- Feature: acpi 00:01:19.296 ==> default: -- Feature: apic 00:01:19.296 ==> default: -- Feature: pae 00:01:19.296 ==> default: -- Memory: 12288M 00:01:19.296 ==> default: -- Memory Backing: hugepages: 00:01:19.296 ==> default: -- Management MAC: 00:01:19.296 ==> default: -- Loader: 00:01:19.296 ==> default: -- Nvram: 00:01:19.296 ==> default: -- Base box: spdk/fedora39 00:01:19.296 ==> default: -- Storage pool: default 00:01:19.296 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728817500_fa57a6de3bd590254514.img (20G) 00:01:19.296 ==> default: -- Volume Cache: default 00:01:19.296 ==> default: -- Kernel: 00:01:19.296 ==> default: -- Initrd: 00:01:19.296 ==> default: -- Graphics Type: vnc 00:01:19.296 ==> default: -- Graphics Port: -1 00:01:19.296 ==> default: -- Graphics IP: 127.0.0.1 00:01:19.296 ==> default: -- Graphics Password: Not defined 00:01:19.296 ==> default: -- Video Type: cirrus 00:01:19.296 ==> default: -- Video VRAM: 9216 00:01:19.296 ==> default: -- Sound Type: 00:01:19.296 ==> default: -- Keymap: en-us 00:01:19.296 ==> default: -- TPM Path: 00:01:19.296 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:19.296 ==> default: -- Command line args: 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:19.296 ==> default: -> value=-drive, 00:01:19.296 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:19.296 ==> default: -> value=-drive, 00:01:19.296 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.296 ==> default: -> value=-drive, 00:01:19.296 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.296 ==> default: -> value=-drive, 00:01:19.296 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:19.296 ==> default: -> value=-device, 00:01:19.296 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.296 ==> default: Creating shared folders metadata... 00:01:19.296 ==> default: Starting domain. 00:01:20.677 ==> default: Waiting for domain to get an IP address... 00:01:38.766 ==> default: Waiting for SSH to become available... 00:01:38.766 ==> default: Configuring and enabling network interfaces... 
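Each '-drive ... if=none' entry above attaches a raw backing file, and each 'nvme'/'nvme-ns' device pair exposes it as an emulated NVMe controller and namespace. A condensed sketch of the same QEMU pattern with a single controller and namespace, assuming an existing raw image (path, serial and memory size illustrative):

  qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,format=raw \
    -device nvme,id=nvme-0,serial=12340 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096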
00:01:41.301 default: SSH address: 192.168.121.229:22 00:01:41.301 default: SSH username: vagrant 00:01:41.301 default: SSH auth method: private key 00:01:43.833 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.949 ==> default: Mounting SSHFS shared folder... 00:01:52.886 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:52.886 ==> default: Checking Mount.. 00:01:54.264 ==> default: Folder Successfully Mounted! 00:01:54.264 ==> default: Running provisioner: file... 00:01:54.832 default: ~/.gitconfig => .gitconfig 00:01:55.401 00:01:55.401 SUCCESS! 00:01:55.401 00:01:55.401 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:55.401 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:55.401 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:55.401 00:01:55.410 [Pipeline] } 00:01:55.425 [Pipeline] // stage 00:01:55.435 [Pipeline] dir 00:01:55.435 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:55.437 [Pipeline] { 00:01:55.450 [Pipeline] catchError 00:01:55.452 [Pipeline] { 00:01:55.464 [Pipeline] sh 00:01:55.745 + vagrant ssh-config --host vagrant 00:01:55.745 + sed -ne /^Host/,$p 00:01:55.745 + tee ssh_conf 00:01:59.060 Host vagrant 00:01:59.060 HostName 192.168.121.229 00:01:59.060 User vagrant 00:01:59.060 Port 22 00:01:59.060 UserKnownHostsFile /dev/null 00:01:59.060 StrictHostKeyChecking no 00:01:59.060 PasswordAuthentication no 00:01:59.060 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.060 IdentitiesOnly yes 00:01:59.060 LogLevel FATAL 00:01:59.060 ForwardAgent yes 00:01:59.060 ForwardX11 yes 00:01:59.060 00:01:59.074 [Pipeline] withEnv 00:01:59.076 [Pipeline] { 00:01:59.089 [Pipeline] sh 00:01:59.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:59.381 source /etc/os-release 00:01:59.381 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.381 # Minimal, systemd-like check. 00:01:59.381 if [[ -e /.dockerenv ]]; then 00:01:59.381 # Clear garbage from the node's name: 00:01:59.381 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.381 # $HOSTNAME is the actual container id 00:01:59.381 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.381 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.381 # We can assume this is a mount from a host where container is running, 00:01:59.381 # so fetch its hostname to easily identify the target swarm worker. 
00:01:59.381 container="$(< /etc/hostname) ($agent)" 00:01:59.381 else 00:01:59.381 # Fallback 00:01:59.381 container=$agent 00:01:59.381 fi 00:01:59.381 fi 00:01:59.381 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.381 00:01:59.392 [Pipeline] } 00:01:59.408 [Pipeline] // withEnv 00:01:59.416 [Pipeline] setCustomBuildProperty 00:01:59.431 [Pipeline] stage 00:01:59.433 [Pipeline] { (Tests) 00:01:59.452 [Pipeline] sh 00:01:59.732 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.005 [Pipeline] sh 00:02:00.284 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.557 [Pipeline] timeout 00:02:00.558 Timeout set to expire in 1 hr 0 min 00:02:00.560 [Pipeline] { 00:02:00.573 [Pipeline] sh 00:02:00.854 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:01.421 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:02:01.434 [Pipeline] sh 00:02:01.716 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.987 [Pipeline] sh 00:02:02.267 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.539 [Pipeline] sh 00:02:02.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:03.074 ++ readlink -f spdk_repo 00:02:03.074 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.074 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.074 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.074 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.074 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.074 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.074 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.074 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:03.074 + cd /home/vagrant/spdk_repo 00:02:03.074 + source /etc/os-release 00:02:03.074 ++ NAME='Fedora Linux' 00:02:03.074 ++ VERSION='39 (Cloud Edition)' 00:02:03.074 ++ ID=fedora 00:02:03.074 ++ VERSION_ID=39 00:02:03.074 ++ VERSION_CODENAME= 00:02:03.074 ++ PLATFORM_ID=platform:f39 00:02:03.074 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.074 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.074 ++ LOGO=fedora-logo-icon 00:02:03.074 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.074 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.074 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.074 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.075 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.075 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.075 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.075 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.075 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.075 ++ SUPPORT_END=2024-11-12 00:02:03.075 ++ VARIANT='Cloud Edition' 00:02:03.075 ++ VARIANT_ID=cloud 00:02:03.075 + uname -a 00:02:03.075 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.075 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.075 Hugepages 00:02:03.075 node hugesize free / total 00:02:03.075 node0 1048576kB 0 / 0 00:02:03.075 node0 2048kB 0 / 0 00:02:03.075 00:02:03.075 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.075 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.075 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:03.075 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:03.075 + rm -f /tmp/spdk-ld-path 00:02:03.075 + source autorun-spdk.conf 00:02:03.075 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.075 ++ SPDK_TEST_NVMF=1 00:02:03.075 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.075 ++ SPDK_TEST_URING=1 00:02:03.075 ++ SPDK_TEST_VFIOUSER=1 00:02:03.075 ++ SPDK_TEST_USDT=1 00:02:03.075 ++ SPDK_RUN_UBSAN=1 00:02:03.075 ++ NET_TYPE=virt 00:02:03.075 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.075 ++ RUN_NIGHTLY=1 00:02:03.075 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.075 + [[ -n '' ]] 00:02:03.075 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:03.334 + for M in /var/spdk/build-*-manifest.txt 00:02:03.334 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:03.334 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.334 + for M in /var/spdk/build-*-manifest.txt 00:02:03.334 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.334 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.334 + for M in /var/spdk/build-*-manifest.txt 00:02:03.334 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.334 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.334 ++ uname 00:02:03.334 + [[ Linux == \L\i\n\u\x ]] 00:02:03.334 + sudo dmesg -T 00:02:03.334 + sudo dmesg --clear 00:02:03.334 + dmesg_pid=5234 00:02:03.334 + [[ Fedora Linux == FreeBSD ]] 00:02:03.334 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.334 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.334 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.334 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:03.334 + sudo dmesg -Tw 00:02:03.334 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.334 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.334 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.334 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.334 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.334 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.334 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.334 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.334 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.334 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.334 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.334 Test configuration: 00:02:03.334 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.334 SPDK_TEST_NVMF=1 00:02:03.334 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.334 SPDK_TEST_URING=1 00:02:03.334 SPDK_TEST_VFIOUSER=1 00:02:03.334 SPDK_TEST_USDT=1 00:02:03.334 SPDK_RUN_UBSAN=1 00:02:03.334 NET_TYPE=virt 00:02:03.334 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.334 RUN_NIGHTLY=1 11:05:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.334 11:05:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.334 11:05:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.334 11:05:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.334 11:05:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.334 11:05:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.334 11:05:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.334 11:05:44 -- paths/export.sh@5 -- $ export PATH 00:02:03.334 11:05:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.334 11:05:44 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:03.334 11:05:44 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:03.334 11:05:44 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728817544.XXXXXX 00:02:03.334 
11:05:44 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728817544.xksSt4 00:02:03.334 11:05:44 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:03.334 11:05:44 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:03.334 11:05:44 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:03.334 11:05:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.334 11:05:44 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.334 11:05:44 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:03.334 11:05:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:03.334 11:05:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.334 11:05:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:03.334 11:05:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.334 11:05:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.334 11:05:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.334 11:05:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.334 Sun Oct 13 11:05:44 AM UTC 2024 00:02:03.334 11:05:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.334 LTS-66-g726a04d70 00:02:03.334 11:05:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.334 11:05:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.334 11:05:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.334 11:05:44 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:03.334 11:05:44 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:03.334 11:05:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.334 ************************************ 00:02:03.334 START TEST ubsan 00:02:03.334 ************************************ 00:02:03.334 11:05:44 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:03.334 using ubsan 00:02:03.334 00:02:03.334 real 0m0.000s 00:02:03.334 user 0m0.000s 00:02:03.334 sys 0m0.000s 00:02:03.334 11:05:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:03.334 11:05:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.334 ************************************ 00:02:03.334 END TEST ubsan 00:02:03.335 ************************************ 00:02:03.594 11:05:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:03.594 11:05:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.594 11:05:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.594 11:05:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user 
--with-uring --with-shared 00:02:03.852 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:03.852 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.111 Using 'verbs' RDMA provider 00:02:17.329 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:29.539 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:30.117 Creating mk/config.mk...done. 00:02:30.117 Creating mk/cc.flags.mk...done. 00:02:30.117 Type 'make' to build. 00:02:30.117 11:06:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:30.117 11:06:11 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:30.118 11:06:11 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:30.118 11:06:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.118 ************************************ 00:02:30.118 START TEST make 00:02:30.118 ************************************ 00:02:30.118 11:06:11 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:30.376 make[1]: Nothing to be done for 'all'. 00:02:31.750 The Meson build system 00:02:31.750 Version: 1.5.0 00:02:31.750 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:31.751 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:31.751 Build type: native build 00:02:31.751 Project name: libvfio-user 00:02:31.751 Project version: 0.0.1 00:02:31.751 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:31.751 C linker for the host machine: cc ld.bfd 2.40-14 00:02:31.751 Host machine cpu family: x86_64 00:02:31.751 Host machine cpu: x86_64 00:02:31.751 Run-time dependency threads found: YES 00:02:31.751 Library dl found: YES 00:02:31.751 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:31.751 Run-time dependency json-c found: YES 0.17 00:02:31.751 Run-time dependency cmocka found: YES 1.1.7 00:02:31.751 Program pytest-3 found: NO 00:02:31.751 Program flake8 found: NO 00:02:31.751 Program misspell-fixer found: NO 00:02:31.751 Program restructuredtext-lint found: NO 00:02:31.751 Program valgrind found: YES (/usr/bin/valgrind) 00:02:31.751 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.751 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.751 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.751 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:31.751 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:31.751 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:31.751 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:31.751 Build targets in project: 8 00:02:31.751 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:31.751 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:31.751 00:02:31.751 libvfio-user 0.0.1 00:02:31.751 00:02:31.751 User defined options 00:02:31.751 buildtype : debug 00:02:31.751 default_library: shared 00:02:31.751 libdir : /usr/local/lib 00:02:31.751 00:02:31.751 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.009 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:32.267 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:32.267 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:32.267 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:32.267 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:32.267 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:32.267 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:32.267 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:32.267 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:32.267 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:32.267 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:32.267 [11/37] Compiling C object samples/null.p/null.c.o 00:02:32.267 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:32.267 [13/37] Compiling C object samples/client.p/client.c.o 00:02:32.267 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:32.267 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:32.525 [16/37] Compiling C object samples/server.p/server.c.o 00:02:32.525 [17/37] Linking target samples/client 00:02:32.525 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:32.525 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:32.525 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:32.525 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:32.525 [22/37] Linking target lib/libvfio-user.so.0.0.1 00:02:32.525 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:32.525 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:32.525 [25/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:32.525 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:32.525 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:32.525 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:32.525 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:32.525 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:32.783 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:32.783 [32/37] Linking target samples/server 00:02:32.783 [33/37] Linking target test/unit_tests 00:02:32.783 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:32.783 [35/37] Linking target samples/lspci 00:02:32.783 [36/37] Linking target samples/null 00:02:32.783 [37/37] Linking target samples/gpio-pci-idio-16 00:02:32.783 INFO: autodetecting backend as ninja 00:02:32.783 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:32.783 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:33.349 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:33.349 ninja: no work to do. 00:02:41.475 The Meson build system 00:02:41.475 Version: 1.5.0 00:02:41.475 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.475 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.475 Build type: native build 00:02:41.475 Program cat found: YES (/usr/bin/cat) 00:02:41.475 Project name: DPDK 00:02:41.475 Project version: 23.11.0 00:02:41.475 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.475 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.475 Host machine cpu family: x86_64 00:02:41.475 Host machine cpu: x86_64 00:02:41.475 Message: ## Building in Developer Mode ## 00:02:41.475 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.475 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.475 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.475 Program python3 found: YES (/usr/bin/python3) 00:02:41.475 Program cat found: YES (/usr/bin/cat) 00:02:41.475 Compiler for C supports arguments -march=native: YES 00:02:41.475 Checking for size of "void *" : 8 00:02:41.475 Checking for size of "void *" : 8 (cached) 00:02:41.475 Library m found: YES 00:02:41.475 Library numa found: YES 00:02:41.475 Has header "numaif.h" : YES 00:02:41.475 Library fdt found: NO 00:02:41.475 Library execinfo found: NO 00:02:41.475 Has header "execinfo.h" : YES 00:02:41.475 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.475 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.475 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.475 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.475 Run-time dependency openssl found: YES 3.1.1 00:02:41.475 Run-time dependency libpcap found: YES 1.10.4 00:02:41.475 Has header "pcap.h" with dependency libpcap: YES 00:02:41.475 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.475 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.475 Compiler for C supports arguments -Wformat: YES 00:02:41.475 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.475 Compiler for C supports arguments -Wformat-security: NO 00:02:41.475 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.475 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.475 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.475 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.475 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.475 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.475 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.475 Compiler for C supports arguments -Wundef: YES 00:02:41.475 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.475 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.475 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.475 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.475 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.475 Program objdump found: YES (/usr/bin/objdump) 00:02:41.475 
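Both the libvfio-user build above and the DPDK build being configured here follow the same meson flow: configure an out-of-tree debug build as a shared library, compile with ninja, then stage the install through DESTDIR. A minimal sketch with an illustrative build directory and staging path:

  # configure: debug build, shared libraries (matches the 'User defined options' meson reports)
  meson setup build-debug --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  # stage the install into DESTDIR instead of the live system, as the job does for libvfio-user
  DESTDIR=/tmp/stage meson install -C build-debug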
Compiler for C supports arguments -mavx512f: YES 00:02:41.475 Checking if "AVX512 checking" compiles: YES 00:02:41.475 Fetching value of define "__SSE4_2__" : 1 00:02:41.475 Fetching value of define "__AES__" : 1 00:02:41.475 Fetching value of define "__AVX__" : 1 00:02:41.475 Fetching value of define "__AVX2__" : 1 00:02:41.475 Fetching value of define "__AVX512BW__" : (undefined) 00:02:41.475 Fetching value of define "__AVX512CD__" : (undefined) 00:02:41.475 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:41.475 Fetching value of define "__AVX512F__" : (undefined) 00:02:41.475 Fetching value of define "__AVX512VL__" : (undefined) 00:02:41.475 Fetching value of define "__PCLMUL__" : 1 00:02:41.475 Fetching value of define "__RDRND__" : 1 00:02:41.475 Fetching value of define "__RDSEED__" : 1 00:02:41.475 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:41.475 Fetching value of define "__znver1__" : (undefined) 00:02:41.475 Fetching value of define "__znver2__" : (undefined) 00:02:41.475 Fetching value of define "__znver3__" : (undefined) 00:02:41.475 Fetching value of define "__znver4__" : (undefined) 00:02:41.475 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.475 Message: lib/log: Defining dependency "log" 00:02:41.475 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.475 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.475 Checking for function "getentropy" : NO 00:02:41.475 Message: lib/eal: Defining dependency "eal" 00:02:41.475 Message: lib/ring: Defining dependency "ring" 00:02:41.475 Message: lib/rcu: Defining dependency "rcu" 00:02:41.475 Message: lib/mempool: Defining dependency "mempool" 00:02:41.475 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.475 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.475 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.475 Compiler for C supports arguments -mpclmul: YES 00:02:41.475 Compiler for C supports arguments -maes: YES 00:02:41.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.475 Compiler for C supports arguments -mavx512bw: YES 00:02:41.475 Compiler for C supports arguments -mavx512dq: YES 00:02:41.475 Compiler for C supports arguments -mavx512vl: YES 00:02:41.475 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.475 Compiler for C supports arguments -mavx2: YES 00:02:41.475 Compiler for C supports arguments -mavx: YES 00:02:41.475 Message: lib/net: Defining dependency "net" 00:02:41.475 Message: lib/meter: Defining dependency "meter" 00:02:41.475 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.475 Message: lib/pci: Defining dependency "pci" 00:02:41.475 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.475 Message: lib/hash: Defining dependency "hash" 00:02:41.475 Message: lib/timer: Defining dependency "timer" 00:02:41.475 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.475 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.476 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.476 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.476 Message: lib/power: Defining dependency "power" 00:02:41.476 Message: lib/reorder: Defining dependency "reorder" 00:02:41.476 Message: lib/security: Defining dependency "security" 00:02:41.476 Has header "linux/userfaultfd.h" : YES 00:02:41.476 Has header "linux/vduse.h" : YES 00:02:41.476 Message: lib/vhost: Defining dependency "vhost" 00:02:41.476 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:41.476 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.476 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.476 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.476 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.476 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.476 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.476 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.476 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.476 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.476 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.476 Configuring doxy-api-html.conf using configuration 00:02:41.476 Configuring doxy-api-man.conf using configuration 00:02:41.476 Program mandb found: YES (/usr/bin/mandb) 00:02:41.476 Program sphinx-build found: NO 00:02:41.476 Configuring rte_build_config.h using configuration 00:02:41.476 Message: 00:02:41.476 ================= 00:02:41.476 Applications Enabled 00:02:41.476 ================= 00:02:41.476 00:02:41.476 apps: 00:02:41.476 00:02:41.476 00:02:41.476 Message: 00:02:41.476 ================= 00:02:41.476 Libraries Enabled 00:02:41.476 ================= 00:02:41.476 00:02:41.476 libs: 00:02:41.476 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.476 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.476 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.476 00:02:41.476 Message: 00:02:41.476 =============== 00:02:41.476 Drivers Enabled 00:02:41.476 =============== 00:02:41.476 00:02:41.476 common: 00:02:41.476 00:02:41.476 bus: 00:02:41.476 pci, vdev, 00:02:41.476 mempool: 00:02:41.476 ring, 00:02:41.476 dma: 00:02:41.476 00:02:41.476 net: 00:02:41.476 00:02:41.476 crypto: 00:02:41.476 00:02:41.476 compress: 00:02:41.476 00:02:41.476 vdpa: 00:02:41.476 00:02:41.476 00:02:41.476 Message: 00:02:41.476 ================= 00:02:41.476 Content Skipped 00:02:41.476 ================= 00:02:41.476 00:02:41.476 apps: 00:02:41.476 dumpcap: explicitly disabled via build config 00:02:41.476 graph: explicitly disabled via build config 00:02:41.476 pdump: explicitly disabled via build config 00:02:41.476 proc-info: explicitly disabled via build config 00:02:41.476 test-acl: explicitly disabled via build config 00:02:41.476 test-bbdev: explicitly disabled via build config 00:02:41.476 test-cmdline: explicitly disabled via build config 00:02:41.476 test-compress-perf: explicitly disabled via build config 00:02:41.476 test-crypto-perf: explicitly disabled via build config 00:02:41.476 test-dma-perf: explicitly disabled via build config 00:02:41.476 test-eventdev: explicitly disabled via build config 00:02:41.476 test-fib: explicitly disabled via build config 00:02:41.476 test-flow-perf: explicitly disabled via build config 00:02:41.476 test-gpudev: explicitly disabled via build config 00:02:41.476 test-mldev: explicitly disabled via build config 00:02:41.476 test-pipeline: explicitly disabled via build config 00:02:41.476 test-pmd: explicitly disabled via build config 00:02:41.476 test-regex: explicitly disabled via build config 00:02:41.476 test-sad: explicitly disabled via build config 00:02:41.476 test-security-perf: explicitly disabled via build config 00:02:41.476 00:02:41.476 libs: 00:02:41.476 metrics: explicitly 
disabled via build config 00:02:41.476 acl: explicitly disabled via build config 00:02:41.476 bbdev: explicitly disabled via build config 00:02:41.476 bitratestats: explicitly disabled via build config 00:02:41.476 bpf: explicitly disabled via build config 00:02:41.476 cfgfile: explicitly disabled via build config 00:02:41.476 distributor: explicitly disabled via build config 00:02:41.476 efd: explicitly disabled via build config 00:02:41.476 eventdev: explicitly disabled via build config 00:02:41.476 dispatcher: explicitly disabled via build config 00:02:41.476 gpudev: explicitly disabled via build config 00:02:41.476 gro: explicitly disabled via build config 00:02:41.476 gso: explicitly disabled via build config 00:02:41.476 ip_frag: explicitly disabled via build config 00:02:41.476 jobstats: explicitly disabled via build config 00:02:41.476 latencystats: explicitly disabled via build config 00:02:41.476 lpm: explicitly disabled via build config 00:02:41.476 member: explicitly disabled via build config 00:02:41.476 pcapng: explicitly disabled via build config 00:02:41.476 rawdev: explicitly disabled via build config 00:02:41.476 regexdev: explicitly disabled via build config 00:02:41.476 mldev: explicitly disabled via build config 00:02:41.476 rib: explicitly disabled via build config 00:02:41.476 sched: explicitly disabled via build config 00:02:41.476 stack: explicitly disabled via build config 00:02:41.476 ipsec: explicitly disabled via build config 00:02:41.476 pdcp: explicitly disabled via build config 00:02:41.476 fib: explicitly disabled via build config 00:02:41.476 port: explicitly disabled via build config 00:02:41.476 pdump: explicitly disabled via build config 00:02:41.476 table: explicitly disabled via build config 00:02:41.476 pipeline: explicitly disabled via build config 00:02:41.476 graph: explicitly disabled via build config 00:02:41.476 node: explicitly disabled via build config 00:02:41.476 00:02:41.476 drivers: 00:02:41.476 common/cpt: not in enabled drivers build config 00:02:41.476 common/dpaax: not in enabled drivers build config 00:02:41.476 common/iavf: not in enabled drivers build config 00:02:41.476 common/idpf: not in enabled drivers build config 00:02:41.476 common/mvep: not in enabled drivers build config 00:02:41.476 common/octeontx: not in enabled drivers build config 00:02:41.476 bus/auxiliary: not in enabled drivers build config 00:02:41.476 bus/cdx: not in enabled drivers build config 00:02:41.476 bus/dpaa: not in enabled drivers build config 00:02:41.476 bus/fslmc: not in enabled drivers build config 00:02:41.476 bus/ifpga: not in enabled drivers build config 00:02:41.476 bus/platform: not in enabled drivers build config 00:02:41.476 bus/vmbus: not in enabled drivers build config 00:02:41.476 common/cnxk: not in enabled drivers build config 00:02:41.476 common/mlx5: not in enabled drivers build config 00:02:41.476 common/nfp: not in enabled drivers build config 00:02:41.476 common/qat: not in enabled drivers build config 00:02:41.476 common/sfc_efx: not in enabled drivers build config 00:02:41.476 mempool/bucket: not in enabled drivers build config 00:02:41.476 mempool/cnxk: not in enabled drivers build config 00:02:41.476 mempool/dpaa: not in enabled drivers build config 00:02:41.476 mempool/dpaa2: not in enabled drivers build config 00:02:41.476 mempool/octeontx: not in enabled drivers build config 00:02:41.476 mempool/stack: not in enabled drivers build config 00:02:41.476 dma/cnxk: not in enabled drivers build config 00:02:41.476 dma/dpaa: not in 
enabled drivers build config 00:02:41.476 dma/dpaa2: not in enabled drivers build config 00:02:41.476 dma/hisilicon: not in enabled drivers build config 00:02:41.476 dma/idxd: not in enabled drivers build config 00:02:41.476 dma/ioat: not in enabled drivers build config 00:02:41.476 dma/skeleton: not in enabled drivers build config 00:02:41.476 net/af_packet: not in enabled drivers build config 00:02:41.476 net/af_xdp: not in enabled drivers build config 00:02:41.476 net/ark: not in enabled drivers build config 00:02:41.476 net/atlantic: not in enabled drivers build config 00:02:41.476 net/avp: not in enabled drivers build config 00:02:41.476 net/axgbe: not in enabled drivers build config 00:02:41.476 net/bnx2x: not in enabled drivers build config 00:02:41.476 net/bnxt: not in enabled drivers build config 00:02:41.476 net/bonding: not in enabled drivers build config 00:02:41.476 net/cnxk: not in enabled drivers build config 00:02:41.476 net/cpfl: not in enabled drivers build config 00:02:41.476 net/cxgbe: not in enabled drivers build config 00:02:41.476 net/dpaa: not in enabled drivers build config 00:02:41.476 net/dpaa2: not in enabled drivers build config 00:02:41.476 net/e1000: not in enabled drivers build config 00:02:41.476 net/ena: not in enabled drivers build config 00:02:41.476 net/enetc: not in enabled drivers build config 00:02:41.476 net/enetfec: not in enabled drivers build config 00:02:41.476 net/enic: not in enabled drivers build config 00:02:41.476 net/failsafe: not in enabled drivers build config 00:02:41.476 net/fm10k: not in enabled drivers build config 00:02:41.476 net/gve: not in enabled drivers build config 00:02:41.476 net/hinic: not in enabled drivers build config 00:02:41.476 net/hns3: not in enabled drivers build config 00:02:41.476 net/i40e: not in enabled drivers build config 00:02:41.476 net/iavf: not in enabled drivers build config 00:02:41.476 net/ice: not in enabled drivers build config 00:02:41.476 net/idpf: not in enabled drivers build config 00:02:41.476 net/igc: not in enabled drivers build config 00:02:41.476 net/ionic: not in enabled drivers build config 00:02:41.476 net/ipn3ke: not in enabled drivers build config 00:02:41.476 net/ixgbe: not in enabled drivers build config 00:02:41.476 net/mana: not in enabled drivers build config 00:02:41.476 net/memif: not in enabled drivers build config 00:02:41.476 net/mlx4: not in enabled drivers build config 00:02:41.476 net/mlx5: not in enabled drivers build config 00:02:41.476 net/mvneta: not in enabled drivers build config 00:02:41.476 net/mvpp2: not in enabled drivers build config 00:02:41.476 net/netvsc: not in enabled drivers build config 00:02:41.476 net/nfb: not in enabled drivers build config 00:02:41.476 net/nfp: not in enabled drivers build config 00:02:41.476 net/ngbe: not in enabled drivers build config 00:02:41.476 net/null: not in enabled drivers build config 00:02:41.476 net/octeontx: not in enabled drivers build config 00:02:41.476 net/octeon_ep: not in enabled drivers build config 00:02:41.476 net/pcap: not in enabled drivers build config 00:02:41.476 net/pfe: not in enabled drivers build config 00:02:41.476 net/qede: not in enabled drivers build config 00:02:41.476 net/ring: not in enabled drivers build config 00:02:41.476 net/sfc: not in enabled drivers build config 00:02:41.476 net/softnic: not in enabled drivers build config 00:02:41.476 net/tap: not in enabled drivers build config 00:02:41.476 net/thunderx: not in enabled drivers build config 00:02:41.476 net/txgbe: not in enabled drivers 
build config 00:02:41.477 net/vdev_netvsc: not in enabled drivers build config 00:02:41.477 net/vhost: not in enabled drivers build config 00:02:41.477 net/virtio: not in enabled drivers build config 00:02:41.477 net/vmxnet3: not in enabled drivers build config 00:02:41.477 raw/*: missing internal dependency, "rawdev" 00:02:41.477 crypto/armv8: not in enabled drivers build config 00:02:41.477 crypto/bcmfs: not in enabled drivers build config 00:02:41.477 crypto/caam_jr: not in enabled drivers build config 00:02:41.477 crypto/ccp: not in enabled drivers build config 00:02:41.477 crypto/cnxk: not in enabled drivers build config 00:02:41.477 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.477 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.477 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.477 crypto/mlx5: not in enabled drivers build config 00:02:41.477 crypto/mvsam: not in enabled drivers build config 00:02:41.477 crypto/nitrox: not in enabled drivers build config 00:02:41.477 crypto/null: not in enabled drivers build config 00:02:41.477 crypto/octeontx: not in enabled drivers build config 00:02:41.477 crypto/openssl: not in enabled drivers build config 00:02:41.477 crypto/scheduler: not in enabled drivers build config 00:02:41.477 crypto/uadk: not in enabled drivers build config 00:02:41.477 crypto/virtio: not in enabled drivers build config 00:02:41.477 compress/isal: not in enabled drivers build config 00:02:41.477 compress/mlx5: not in enabled drivers build config 00:02:41.477 compress/octeontx: not in enabled drivers build config 00:02:41.477 compress/zlib: not in enabled drivers build config 00:02:41.477 regex/*: missing internal dependency, "regexdev" 00:02:41.477 ml/*: missing internal dependency, "mldev" 00:02:41.477 vdpa/ifc: not in enabled drivers build config 00:02:41.477 vdpa/mlx5: not in enabled drivers build config 00:02:41.477 vdpa/nfp: not in enabled drivers build config 00:02:41.477 vdpa/sfc: not in enabled drivers build config 00:02:41.477 event/*: missing internal dependency, "eventdev" 00:02:41.477 baseband/*: missing internal dependency, "bbdev" 00:02:41.477 gpu/*: missing internal dependency, "gpudev" 00:02:41.477 00:02:41.477 00:02:41.477 Build targets in project: 85 00:02:41.477 00:02:41.477 DPDK 23.11.0 00:02:41.477 00:02:41.477 User defined options 00:02:41.477 buildtype : debug 00:02:41.477 default_library : shared 00:02:41.477 libdir : lib 00:02:41.477 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.477 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:41.477 c_link_args : 00:02:41.477 cpu_instruction_set: native 00:02:41.477 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:41.477 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:41.477 enable_docs : false 00:02:41.477 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.477 enable_kmods : false 00:02:41.477 tests : false 00:02:41.477 00:02:41.477 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.060 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.060 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.060 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.060 [3/265] Linking static target lib/librte_kvargs.a 00:02:42.060 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.060 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.060 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.060 [7/265] Linking static target lib/librte_log.a 00:02:42.060 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.060 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.060 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.626 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.627 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.885 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.885 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.885 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.885 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.885 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.885 [18/265] Linking static target lib/librte_telemetry.a 00:02:43.144 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.144 [20/265] Linking target lib/librte_log.so.24.0 00:02:43.144 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.144 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.403 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.403 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:43.403 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:43.403 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.661 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:43.661 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.919 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.919 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.919 [31/265] Linking target lib/librte_telemetry.so.24.0 00:02:43.919 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.919 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.178 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.178 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.178 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:44.178 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.178 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.178 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.436 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.436 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.436 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.436 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.436 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.694 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.694 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.953 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.953 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.211 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.211 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:45.211 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.211 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.469 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.469 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.469 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.469 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.469 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.727 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.727 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.727 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.727 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.986 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.244 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.244 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.244 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.502 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.502 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.502 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.502 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.761 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.761 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.761 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.761 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.761 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.761 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.761 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.761 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.020 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.278 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.278 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.278 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.536 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.536 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.795 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.795 [85/265] Linking static target lib/librte_eal.a 00:02:47.795 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.795 [87/265] Linking static target lib/librte_ring.a 00:02:47.795 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.053 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:48.053 [90/265] Linking static target lib/librte_rcu.a 00:02:48.053 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:48.053 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.311 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.569 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.569 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.569 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.569 [97/265] Linking static target lib/librte_mempool.a 00:02:48.828 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.828 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:48.828 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:48.828 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.828 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.828 [103/265] Linking static target lib/librte_mbuf.a 00:02:49.086 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.344 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.344 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.344 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.602 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.602 [109/265] Linking static target lib/librte_net.a 00:02:49.865 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.865 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.865 [112/265] Linking static target lib/librte_meter.a 00:02:49.865 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.865 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.134 [115/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.134 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.134 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.134 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.392 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.651 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.651 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.909 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.909 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.909 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.909 [125/265] Linking static target lib/librte_pci.a 00:02:51.167 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.167 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.167 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.426 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.426 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.426 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.426 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.426 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.426 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.426 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.426 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.426 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.426 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.426 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.685 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.685 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.685 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.685 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.943 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.943 [145/265] Linking static target lib/librte_ethdev.a 00:02:51.943 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.202 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.202 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.202 [149/265] Linking static target lib/librte_timer.a 00:02:52.202 [150/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.202 [151/265] Linking static target lib/librte_cmdline.a 00:02:52.462 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.462 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.462 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.462 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.462 [156/265] Linking static target lib/librte_hash.a 00:02:52.721 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.721 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.721 [159/265] Linking static target lib/librte_compressdev.a 00:02:52.980 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.980 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.238 
[162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.238 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:53.238 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.238 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:53.497 [166/265] Linking static target lib/librte_dmadev.a 00:02:53.497 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.756 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.756 [169/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.756 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.756 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.756 [172/265] Linking static target lib/librte_cryptodev.a 00:02:53.756 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.015 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.015 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.015 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.273 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.273 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.273 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.273 [180/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:54.531 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.531 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.789 [183/265] Linking static target lib/librte_power.a 00:02:54.789 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.789 [185/265] Linking static target lib/librte_reorder.a 00:02:55.046 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.047 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.047 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.350 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.350 [190/265] Linking static target lib/librte_security.a 00:02:55.350 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.350 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.915 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.915 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.915 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.173 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.173 [197/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.173 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.173 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.430 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.430 [201/265] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.688 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.688 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.688 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.688 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.688 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.688 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.688 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.688 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.945 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.945 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.945 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.945 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.945 [214/265] Linking static target drivers/librte_bus_vdev.a 00:02:56.945 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.945 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.945 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:57.203 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.203 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.203 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.461 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.461 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.461 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.461 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:57.461 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.028 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.028 [227/265] Linking static target lib/librte_vhost.a 00:02:58.989 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.989 [229/265] Linking target lib/librte_eal.so.24.0 00:02:59.246 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:59.246 [231/265] Linking target lib/librte_timer.so.24.0 00:02:59.246 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:59.246 [233/265] Linking target lib/librte_ring.so.24.0 00:02:59.246 [234/265] Linking target lib/librte_pci.so.24.0 00:02:59.246 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:59.246 [236/265] Linking target lib/librte_meter.so.24.0 00:02:59.246 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:59.246 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:59.504 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:59.504 [240/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:59.504 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:59.504 [242/265] Linking target lib/librte_rcu.so.24.0 00:02:59.504 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:59.504 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:59.504 [245/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.504 [246/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.504 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:59.504 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:59.763 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:59.763 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:59.763 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:59.763 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:59.763 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:59.763 [254/265] Linking target lib/librte_net.so.24.0 00:02:59.763 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:03:00.021 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:00.021 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:00.021 [258/265] Linking target lib/librte_hash.so.24.0 00:03:00.021 [259/265] Linking target lib/librte_cmdline.so.24.0 00:03:00.021 [260/265] Linking target lib/librte_security.so.24.0 00:03:00.021 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:00.279 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:00.279 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:00.279 [264/265] Linking target lib/librte_power.so.24.0 00:03:00.279 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:00.279 INFO: autodetecting backend as ninja 00:03:00.279 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:01.653 CC lib/log/log.o 00:03:01.653 CC lib/log/log_flags.o 00:03:01.653 CC lib/log/log_deprecated.o 00:03:01.653 CC lib/ut_mock/mock.o 00:03:01.653 CC lib/ut/ut.o 00:03:01.653 LIB libspdk_ut_mock.a 00:03:01.653 LIB libspdk_log.a 00:03:01.653 SO libspdk_ut_mock.so.5.0 00:03:01.653 LIB libspdk_ut.a 00:03:01.653 SO libspdk_log.so.6.1 00:03:01.653 SO libspdk_ut.so.1.0 00:03:01.653 SYMLINK libspdk_ut_mock.so 00:03:01.653 SYMLINK libspdk_log.so 00:03:01.653 SYMLINK libspdk_ut.so 00:03:01.911 CC lib/util/base64.o 00:03:01.911 CC lib/util/bit_array.o 00:03:01.911 CC lib/util/cpuset.o 00:03:01.911 CC lib/util/crc16.o 00:03:01.911 CC lib/util/crc32.o 00:03:01.911 CC lib/util/crc32c.o 00:03:01.911 CXX lib/trace_parser/trace.o 00:03:01.911 CC lib/dma/dma.o 00:03:01.911 CC lib/ioat/ioat.o 00:03:01.911 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.911 CC lib/util/crc32_ieee.o 00:03:01.911 CC lib/util/crc64.o 00:03:01.911 CC lib/vfio_user/host/vfio_user.o 00:03:01.911 CC lib/util/dif.o 00:03:01.911 LIB libspdk_dma.a 00:03:01.911 SO libspdk_dma.so.3.0 00:03:02.169 CC lib/util/fd.o 00:03:02.169 SYMLINK libspdk_dma.so 00:03:02.169 CC lib/util/file.o 00:03:02.169 CC lib/util/hexlify.o 00:03:02.169 CC lib/util/iov.o 00:03:02.169 LIB libspdk_ioat.a 00:03:02.169 CC lib/util/math.o 00:03:02.169 SO 
libspdk_ioat.so.6.0 00:03:02.169 SYMLINK libspdk_ioat.so 00:03:02.169 CC lib/util/pipe.o 00:03:02.169 CC lib/util/strerror_tls.o 00:03:02.169 CC lib/util/string.o 00:03:02.169 LIB libspdk_vfio_user.a 00:03:02.169 CC lib/util/uuid.o 00:03:02.169 CC lib/util/fd_group.o 00:03:02.169 SO libspdk_vfio_user.so.4.0 00:03:02.169 CC lib/util/xor.o 00:03:02.428 SYMLINK libspdk_vfio_user.so 00:03:02.428 CC lib/util/zipf.o 00:03:02.428 LIB libspdk_util.a 00:03:02.687 SO libspdk_util.so.8.0 00:03:02.687 SYMLINK libspdk_util.so 00:03:02.946 LIB libspdk_trace_parser.a 00:03:02.946 CC lib/vmd/vmd.o 00:03:02.946 CC lib/vmd/led.o 00:03:02.946 CC lib/env_dpdk/env.o 00:03:02.946 CC lib/conf/conf.o 00:03:02.946 CC lib/env_dpdk/memory.o 00:03:02.946 CC lib/json/json_parse.o 00:03:02.946 CC lib/json/json_util.o 00:03:02.946 CC lib/rdma/common.o 00:03:02.946 SO libspdk_trace_parser.so.4.0 00:03:02.946 CC lib/idxd/idxd.o 00:03:02.946 SYMLINK libspdk_trace_parser.so 00:03:02.946 CC lib/idxd/idxd_user.o 00:03:02.946 CC lib/idxd/idxd_kernel.o 00:03:03.205 LIB libspdk_conf.a 00:03:03.205 CC lib/rdma/rdma_verbs.o 00:03:03.205 SO libspdk_conf.so.5.0 00:03:03.205 CC lib/json/json_write.o 00:03:03.205 CC lib/env_dpdk/pci.o 00:03:03.205 SYMLINK libspdk_conf.so 00:03:03.205 CC lib/env_dpdk/init.o 00:03:03.205 CC lib/env_dpdk/threads.o 00:03:03.205 CC lib/env_dpdk/pci_ioat.o 00:03:03.465 CC lib/env_dpdk/pci_virtio.o 00:03:03.465 LIB libspdk_rdma.a 00:03:03.465 SO libspdk_rdma.so.5.0 00:03:03.465 CC lib/env_dpdk/pci_vmd.o 00:03:03.465 LIB libspdk_idxd.a 00:03:03.465 LIB libspdk_json.a 00:03:03.465 SYMLINK libspdk_rdma.so 00:03:03.465 CC lib/env_dpdk/pci_idxd.o 00:03:03.465 SO libspdk_idxd.so.11.0 00:03:03.465 SO libspdk_json.so.5.1 00:03:03.465 CC lib/env_dpdk/pci_event.o 00:03:03.465 CC lib/env_dpdk/sigbus_handler.o 00:03:03.465 SYMLINK libspdk_idxd.so 00:03:03.465 LIB libspdk_vmd.a 00:03:03.465 CC lib/env_dpdk/pci_dpdk.o 00:03:03.465 SYMLINK libspdk_json.so 00:03:03.465 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.465 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.465 SO libspdk_vmd.so.5.0 00:03:03.723 SYMLINK libspdk_vmd.so 00:03:03.723 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.723 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.723 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.723 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.982 LIB libspdk_jsonrpc.a 00:03:03.982 SO libspdk_jsonrpc.so.5.1 00:03:04.240 SYMLINK libspdk_jsonrpc.so 00:03:04.240 CC lib/rpc/rpc.o 00:03:04.498 LIB libspdk_env_dpdk.a 00:03:04.498 SO libspdk_env_dpdk.so.13.0 00:03:04.498 LIB libspdk_rpc.a 00:03:04.498 SO libspdk_rpc.so.5.0 00:03:04.498 SYMLINK libspdk_rpc.so 00:03:04.498 SYMLINK libspdk_env_dpdk.so 00:03:04.757 CC lib/sock/sock_rpc.o 00:03:04.757 CC lib/sock/sock.o 00:03:04.757 CC lib/trace/trace.o 00:03:04.757 CC lib/trace/trace_flags.o 00:03:04.757 CC lib/trace/trace_rpc.o 00:03:04.757 CC lib/notify/notify_rpc.o 00:03:04.757 CC lib/notify/notify.o 00:03:05.016 LIB libspdk_notify.a 00:03:05.016 LIB libspdk_trace.a 00:03:05.016 SO libspdk_notify.so.5.0 00:03:05.016 SO libspdk_trace.so.9.0 00:03:05.016 SYMLINK libspdk_notify.so 00:03:05.016 SYMLINK libspdk_trace.so 00:03:05.274 LIB libspdk_sock.a 00:03:05.274 SO libspdk_sock.so.8.0 00:03:05.274 CC lib/thread/thread.o 00:03:05.274 CC lib/thread/iobuf.o 00:03:05.274 SYMLINK libspdk_sock.so 00:03:05.533 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.533 CC lib/nvme/nvme_fabric.o 00:03:05.533 CC lib/nvme/nvme_ctrlr.o 00:03:05.533 CC lib/nvme/nvme_ns_cmd.o 00:03:05.533 CC lib/nvme/nvme_pcie_common.o 00:03:05.533 CC 
lib/nvme/nvme_ns.o 00:03:05.533 CC lib/nvme/nvme_pcie.o 00:03:05.533 CC lib/nvme/nvme_qpair.o 00:03:05.533 CC lib/nvme/nvme.o 00:03:06.483 CC lib/nvme/nvme_quirks.o 00:03:06.483 CC lib/nvme/nvme_transport.o 00:03:06.483 CC lib/nvme/nvme_discovery.o 00:03:06.483 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.483 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.483 CC lib/nvme/nvme_tcp.o 00:03:06.483 CC lib/nvme/nvme_opal.o 00:03:06.483 CC lib/nvme/nvme_io_msg.o 00:03:06.744 LIB libspdk_thread.a 00:03:07.020 CC lib/nvme/nvme_poll_group.o 00:03:07.020 SO libspdk_thread.so.9.0 00:03:07.020 CC lib/nvme/nvme_zns.o 00:03:07.020 CC lib/nvme/nvme_cuse.o 00:03:07.020 SYMLINK libspdk_thread.so 00:03:07.020 CC lib/nvme/nvme_vfio_user.o 00:03:07.020 CC lib/nvme/nvme_rdma.o 00:03:07.278 CC lib/accel/accel.o 00:03:07.278 CC lib/accel/accel_rpc.o 00:03:07.278 CC lib/blob/blobstore.o 00:03:07.537 CC lib/accel/accel_sw.o 00:03:07.795 CC lib/init/json_config.o 00:03:07.795 CC lib/blob/request.o 00:03:07.795 CC lib/virtio/virtio.o 00:03:07.795 CC lib/vfu_tgt/tgt_endpoint.o 00:03:07.795 CC lib/vfu_tgt/tgt_rpc.o 00:03:07.795 CC lib/virtio/virtio_vhost_user.o 00:03:07.795 CC lib/init/subsystem.o 00:03:08.053 CC lib/virtio/virtio_vfio_user.o 00:03:08.053 CC lib/virtio/virtio_pci.o 00:03:08.053 CC lib/blob/zeroes.o 00:03:08.053 CC lib/init/subsystem_rpc.o 00:03:08.053 CC lib/init/rpc.o 00:03:08.053 LIB libspdk_vfu_tgt.a 00:03:08.053 SO libspdk_vfu_tgt.so.2.0 00:03:08.311 CC lib/blob/blob_bs_dev.o 00:03:08.311 SYMLINK libspdk_vfu_tgt.so 00:03:08.311 LIB libspdk_init.a 00:03:08.311 LIB libspdk_accel.a 00:03:08.311 SO libspdk_init.so.4.0 00:03:08.311 SO libspdk_accel.so.14.0 00:03:08.311 LIB libspdk_virtio.a 00:03:08.311 SYMLINK libspdk_accel.so 00:03:08.311 SYMLINK libspdk_init.so 00:03:08.311 SO libspdk_virtio.so.6.0 00:03:08.569 SYMLINK libspdk_virtio.so 00:03:08.569 LIB libspdk_nvme.a 00:03:08.569 CC lib/bdev/bdev.o 00:03:08.569 CC lib/bdev/bdev_rpc.o 00:03:08.569 CC lib/bdev/bdev_zone.o 00:03:08.569 CC lib/bdev/part.o 00:03:08.569 CC lib/bdev/scsi_nvme.o 00:03:08.569 CC lib/event/app.o 00:03:08.569 CC lib/event/reactor.o 00:03:08.569 CC lib/event/log_rpc.o 00:03:08.828 CC lib/event/app_rpc.o 00:03:08.828 CC lib/event/scheduler_static.o 00:03:08.828 SO libspdk_nvme.so.12.0 00:03:09.086 SYMLINK libspdk_nvme.so 00:03:09.086 LIB libspdk_event.a 00:03:09.086 SO libspdk_event.so.12.0 00:03:09.086 SYMLINK libspdk_event.so 00:03:10.461 LIB libspdk_blob.a 00:03:10.462 SO libspdk_blob.so.10.1 00:03:10.462 SYMLINK libspdk_blob.so 00:03:10.720 CC lib/blobfs/blobfs.o 00:03:10.720 CC lib/blobfs/tree.o 00:03:10.720 CC lib/lvol/lvol.o 00:03:11.286 LIB libspdk_bdev.a 00:03:11.545 SO libspdk_bdev.so.14.0 00:03:11.545 LIB libspdk_blobfs.a 00:03:11.545 SO libspdk_blobfs.so.9.0 00:03:11.545 SYMLINK libspdk_bdev.so 00:03:11.545 LIB libspdk_lvol.a 00:03:11.545 SO libspdk_lvol.so.9.1 00:03:11.545 SYMLINK libspdk_blobfs.so 00:03:11.803 CC lib/ublk/ublk.o 00:03:11.803 SYMLINK libspdk_lvol.so 00:03:11.803 CC lib/ublk/ublk_rpc.o 00:03:11.803 CC lib/ftl/ftl_core.o 00:03:11.803 CC lib/ftl/ftl_layout.o 00:03:11.803 CC lib/scsi/dev.o 00:03:11.803 CC lib/scsi/lun.o 00:03:11.803 CC lib/scsi/port.o 00:03:11.803 CC lib/ftl/ftl_init.o 00:03:11.803 CC lib/nvmf/ctrlr.o 00:03:11.803 CC lib/nbd/nbd.o 00:03:12.061 CC lib/nvmf/ctrlr_discovery.o 00:03:12.061 CC lib/nvmf/ctrlr_bdev.o 00:03:12.061 CC lib/scsi/scsi.o 00:03:12.061 CC lib/scsi/scsi_bdev.o 00:03:12.061 CC lib/ftl/ftl_debug.o 00:03:12.061 CC lib/ftl/ftl_io.o 00:03:12.061 CC lib/ftl/ftl_sb.o 00:03:12.061 CC 
lib/scsi/scsi_pr.o 00:03:12.320 CC lib/nbd/nbd_rpc.o 00:03:12.320 CC lib/scsi/scsi_rpc.o 00:03:12.320 CC lib/scsi/task.o 00:03:12.320 LIB libspdk_ublk.a 00:03:12.320 LIB libspdk_nbd.a 00:03:12.320 CC lib/ftl/ftl_l2p.o 00:03:12.320 SO libspdk_ublk.so.2.0 00:03:12.320 SO libspdk_nbd.so.6.0 00:03:12.578 CC lib/ftl/ftl_l2p_flat.o 00:03:12.578 CC lib/nvmf/subsystem.o 00:03:12.578 SYMLINK libspdk_nbd.so 00:03:12.578 SYMLINK libspdk_ublk.so 00:03:12.578 CC lib/ftl/ftl_nv_cache.o 00:03:12.578 CC lib/nvmf/nvmf.o 00:03:12.578 CC lib/nvmf/nvmf_rpc.o 00:03:12.578 CC lib/nvmf/transport.o 00:03:12.578 LIB libspdk_scsi.a 00:03:12.578 SO libspdk_scsi.so.8.0 00:03:12.578 CC lib/ftl/ftl_band.o 00:03:12.578 CC lib/ftl/ftl_band_ops.o 00:03:12.578 SYMLINK libspdk_scsi.so 00:03:12.578 CC lib/ftl/ftl_writer.o 00:03:12.836 CC lib/ftl/ftl_rq.o 00:03:12.836 CC lib/ftl/ftl_reloc.o 00:03:13.094 CC lib/nvmf/tcp.o 00:03:13.094 CC lib/ftl/ftl_l2p_cache.o 00:03:13.094 CC lib/nvmf/vfio_user.o 00:03:13.352 CC lib/ftl/ftl_p2l.o 00:03:13.352 CC lib/nvmf/rdma.o 00:03:13.352 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.610 CC lib/iscsi/conn.o 00:03:13.610 CC lib/iscsi/init_grp.o 00:03:13.610 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.610 CC lib/vhost/vhost.o 00:03:13.610 CC lib/vhost/vhost_rpc.o 00:03:13.610 CC lib/vhost/vhost_scsi.o 00:03:13.868 CC lib/iscsi/iscsi.o 00:03:13.868 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.868 CC lib/vhost/vhost_blk.o 00:03:14.126 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.126 CC lib/iscsi/md5.o 00:03:14.126 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.384 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:14.384 CC lib/vhost/rte_vhost_user.o 00:03:14.384 CC lib/iscsi/param.o 00:03:14.642 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:14.642 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:14.642 CC lib/iscsi/portal_grp.o 00:03:14.642 CC lib/iscsi/tgt_node.o 00:03:14.943 CC lib/iscsi/iscsi_subsystem.o 00:03:14.943 CC lib/iscsi/iscsi_rpc.o 00:03:14.943 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:14.943 CC lib/iscsi/task.o 00:03:14.943 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:14.943 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:14.943 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.219 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.219 CC lib/ftl/utils/ftl_conf.o 00:03:15.219 CC lib/ftl/utils/ftl_md.o 00:03:15.219 CC lib/ftl/utils/ftl_mempool.o 00:03:15.219 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.219 CC lib/ftl/utils/ftl_property.o 00:03:15.219 LIB libspdk_iscsi.a 00:03:15.219 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.478 SO libspdk_iscsi.so.7.0 00:03:15.478 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.478 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.478 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.478 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.478 LIB libspdk_vhost.a 00:03:15.478 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.478 SYMLINK libspdk_iscsi.so 00:03:15.478 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.478 LIB libspdk_nvmf.a 00:03:15.736 SO libspdk_vhost.so.7.1 00:03:15.736 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.736 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.736 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.736 CC lib/ftl/base/ftl_base_dev.o 00:03:15.736 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.736 CC lib/ftl/ftl_trace.o 00:03:15.736 SYMLINK libspdk_vhost.so 00:03:15.736 SO libspdk_nvmf.so.17.0 00:03:15.994 SYMLINK libspdk_nvmf.so 00:03:15.994 LIB libspdk_ftl.a 00:03:16.252 SO libspdk_ftl.so.8.0 00:03:16.522 SYMLINK libspdk_ftl.so 00:03:16.785 CC module/vfu_device/vfu_virtio.o 00:03:16.785 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.785 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.785 CC module/accel/error/accel_error.o 00:03:16.785 CC module/scheduler/gscheduler/gscheduler.o 00:03:16.785 CC module/accel/dsa/accel_dsa.o 00:03:16.785 CC module/sock/posix/posix.o 00:03:16.785 CC module/blob/bdev/blob_bdev.o 00:03:16.785 CC module/accel/ioat/accel_ioat.o 00:03:16.785 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.785 LIB libspdk_env_dpdk_rpc.a 00:03:16.785 SO libspdk_env_dpdk_rpc.so.5.0 00:03:17.043 LIB libspdk_scheduler_gscheduler.a 00:03:17.043 LIB libspdk_scheduler_dpdk_governor.a 00:03:17.043 SYMLINK libspdk_env_dpdk_rpc.so 00:03:17.043 SO libspdk_scheduler_gscheduler.so.3.0 00:03:17.043 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:17.043 CC module/accel/error/accel_error_rpc.o 00:03:17.043 LIB libspdk_scheduler_dynamic.a 00:03:17.043 CC module/accel/ioat/accel_ioat_rpc.o 00:03:17.043 SO libspdk_scheduler_dynamic.so.3.0 00:03:17.043 SYMLINK libspdk_scheduler_gscheduler.so 00:03:17.043 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:17.043 CC module/vfu_device/vfu_virtio_blk.o 00:03:17.043 CC module/accel/dsa/accel_dsa_rpc.o 00:03:17.043 LIB libspdk_blob_bdev.a 00:03:17.043 CC module/accel/iaa/accel_iaa.o 00:03:17.043 SYMLINK libspdk_scheduler_dynamic.so 00:03:17.043 CC module/accel/iaa/accel_iaa_rpc.o 00:03:17.043 SO libspdk_blob_bdev.so.10.1 00:03:17.043 LIB libspdk_accel_error.a 00:03:17.043 LIB libspdk_accel_ioat.a 00:03:17.043 CC module/sock/uring/uring.o 00:03:17.043 SO libspdk_accel_error.so.1.0 00:03:17.043 SYMLINK libspdk_blob_bdev.so 00:03:17.043 SO libspdk_accel_ioat.so.5.0 00:03:17.301 CC module/vfu_device/vfu_virtio_scsi.o 00:03:17.301 LIB libspdk_accel_dsa.a 00:03:17.301 SYMLINK libspdk_accel_error.so 00:03:17.301 CC module/vfu_device/vfu_virtio_rpc.o 00:03:17.301 SYMLINK libspdk_accel_ioat.so 00:03:17.301 SO libspdk_accel_dsa.so.4.0 00:03:17.301 SYMLINK libspdk_accel_dsa.so 00:03:17.301 LIB libspdk_accel_iaa.a 00:03:17.301 SO libspdk_accel_iaa.so.2.0 00:03:17.301 SYMLINK libspdk_accel_iaa.so 00:03:17.301 CC module/bdev/delay/vbdev_delay.o 00:03:17.301 CC module/bdev/error/vbdev_error.o 00:03:17.301 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.301 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.559 CC module/bdev/gpt/gpt.o 00:03:17.559 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.559 CC module/bdev/malloc/bdev_malloc.o 00:03:17.559 LIB libspdk_sock_posix.a 00:03:17.559 LIB libspdk_vfu_device.a 00:03:17.559 SO libspdk_sock_posix.so.5.0 00:03:17.559 SO libspdk_vfu_device.so.2.0 00:03:17.559 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.559 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.559 CC module/bdev/gpt/vbdev_gpt.o 00:03:17.559 SYMLINK libspdk_vfu_device.so 00:03:17.559 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.559 SYMLINK libspdk_sock_posix.so 00:03:17.817 LIB libspdk_bdev_error.a 00:03:17.817 SO libspdk_bdev_error.so.5.0 00:03:17.817 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:17.817 CC module/bdev/null/bdev_null.o 00:03:17.817 SYMLINK libspdk_bdev_error.so 00:03:17.817 LIB libspdk_blobfs_bdev.a 00:03:17.817 SO libspdk_blobfs_bdev.so.5.0 00:03:17.817 LIB libspdk_sock_uring.a 00:03:17.817 LIB libspdk_bdev_delay.a 00:03:17.817 SO libspdk_sock_uring.so.4.0 00:03:17.817 SYMLINK libspdk_blobfs_bdev.so 00:03:17.817 LIB libspdk_bdev_malloc.a 00:03:17.817 SO libspdk_bdev_delay.so.5.0 00:03:18.075 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.075 CC module/bdev/nvme/bdev_nvme.o 00:03:18.075 LIB libspdk_bdev_gpt.a 00:03:18.075 SO libspdk_bdev_malloc.so.5.0 00:03:18.075 SYMLINK 
libspdk_sock_uring.so 00:03:18.075 CC module/bdev/null/bdev_null_rpc.o 00:03:18.075 SO libspdk_bdev_gpt.so.5.0 00:03:18.075 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:18.075 SYMLINK libspdk_bdev_delay.so 00:03:18.075 SYMLINK libspdk_bdev_malloc.so 00:03:18.075 SYMLINK libspdk_bdev_gpt.so 00:03:18.075 CC module/bdev/raid/bdev_raid.o 00:03:18.075 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.075 CC module/bdev/split/vbdev_split.o 00:03:18.075 LIB libspdk_bdev_lvol.a 00:03:18.075 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:18.075 SO libspdk_bdev_lvol.so.5.0 00:03:18.075 CC module/bdev/uring/bdev_uring.o 00:03:18.075 LIB libspdk_bdev_null.a 00:03:18.075 CC module/bdev/uring/bdev_uring_rpc.o 00:03:18.333 SO libspdk_bdev_null.so.5.0 00:03:18.333 LIB libspdk_bdev_passthru.a 00:03:18.333 SYMLINK libspdk_bdev_lvol.so 00:03:18.333 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:18.333 SO libspdk_bdev_passthru.so.5.0 00:03:18.333 SYMLINK libspdk_bdev_null.so 00:03:18.333 CC module/bdev/raid/bdev_raid_rpc.o 00:03:18.333 SYMLINK libspdk_bdev_passthru.so 00:03:18.333 CC module/bdev/split/vbdev_split_rpc.o 00:03:18.333 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.333 CC module/bdev/raid/raid0.o 00:03:18.591 CC module/bdev/aio/bdev_aio.o 00:03:18.591 LIB libspdk_bdev_split.a 00:03:18.591 LIB libspdk_bdev_zone_block.a 00:03:18.591 CC module/bdev/aio/bdev_aio_rpc.o 00:03:18.591 SO libspdk_bdev_split.so.5.0 00:03:18.591 SO libspdk_bdev_zone_block.so.5.0 00:03:18.591 LIB libspdk_bdev_uring.a 00:03:18.591 SO libspdk_bdev_uring.so.5.0 00:03:18.591 SYMLINK libspdk_bdev_split.so 00:03:18.591 CC module/bdev/raid/raid1.o 00:03:18.591 SYMLINK libspdk_bdev_zone_block.so 00:03:18.591 CC module/bdev/raid/concat.o 00:03:18.591 SYMLINK libspdk_bdev_uring.so 00:03:18.849 CC module/bdev/nvme/nvme_rpc.o 00:03:18.849 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.849 CC module/bdev/ftl/bdev_ftl.o 00:03:18.849 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.849 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.849 LIB libspdk_bdev_aio.a 00:03:18.849 SO libspdk_bdev_aio.so.5.0 00:03:18.849 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.849 CC module/bdev/nvme/vbdev_opal.o 00:03:18.849 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.849 SYMLINK libspdk_bdev_aio.so 00:03:18.849 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.107 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:19.107 LIB libspdk_bdev_raid.a 00:03:19.107 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.107 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:19.107 SO libspdk_bdev_raid.so.5.0 00:03:19.107 SYMLINK libspdk_bdev_raid.so 00:03:19.107 LIB libspdk_bdev_ftl.a 00:03:19.107 SO libspdk_bdev_ftl.so.5.0 00:03:19.107 LIB libspdk_bdev_iscsi.a 00:03:19.365 SO libspdk_bdev_iscsi.so.5.0 00:03:19.365 SYMLINK libspdk_bdev_ftl.so 00:03:19.365 SYMLINK libspdk_bdev_iscsi.so 00:03:19.365 LIB libspdk_bdev_virtio.a 00:03:19.365 SO libspdk_bdev_virtio.so.5.0 00:03:19.623 SYMLINK libspdk_bdev_virtio.so 00:03:20.557 LIB libspdk_bdev_nvme.a 00:03:20.557 SO libspdk_bdev_nvme.so.6.0 00:03:20.557 SYMLINK libspdk_bdev_nvme.so 00:03:21.121 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:21.121 CC module/event/subsystems/sock/sock.o 00:03:21.121 CC module/event/subsystems/vmd/vmd.o 00:03:21.121 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.121 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.121 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.121 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.121 CC module/event/subsystems/iobuf/iobuf_rpc.o 
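For context, the DPDK configuration summarized earlier in this log (buildtype : debug, default_library : shared, the c_args list, enable_drivers : bus,bus/pci,bus/vdev,mempool/ring, enable_kmods : false, tests : false) maps onto ordinary meson project options. A rough hand-run equivalent is sketched below; this is an approximation only, since the real invocation is issued by SPDK's build scripts and is not printed in this log, while the ninja step does appear verbatim above ("ninja -C .../dpdk/build-tmp -j 10").

  # Approximate reconstruction -- option values are taken from the "User defined
  # options" summary above; the command line itself is an assumption.
  cd /home/vagrant/spdk_repo/spdk
  meson setup dpdk/build-tmp dpdk \
      --prefix="$PWD/dpdk/build" --libdir lib \
      --buildtype debug --default-library shared \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false -Dtests=false
  # disable_apps, disable_libs, enable_docs and cpu_instruction_set follow the same -D pattern.
  ninja -C dpdk/build-tmp -j 10    # same ninja step as shown earlier in the log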
00:03:21.121 LIB libspdk_event_vhost_blk.a 00:03:21.121 LIB libspdk_event_sock.a 00:03:21.121 LIB libspdk_event_vfu_tgt.a 00:03:21.121 LIB libspdk_event_scheduler.a 00:03:21.121 LIB libspdk_event_vmd.a 00:03:21.121 SO libspdk_event_vhost_blk.so.2.0 00:03:21.121 SO libspdk_event_sock.so.4.0 00:03:21.121 SO libspdk_event_vfu_tgt.so.2.0 00:03:21.121 SO libspdk_event_scheduler.so.3.0 00:03:21.121 SO libspdk_event_vmd.so.5.0 00:03:21.121 LIB libspdk_event_iobuf.a 00:03:21.121 SYMLINK libspdk_event_vhost_blk.so 00:03:21.121 SYMLINK libspdk_event_vfu_tgt.so 00:03:21.121 SYMLINK libspdk_event_scheduler.so 00:03:21.121 SYMLINK libspdk_event_sock.so 00:03:21.121 SO libspdk_event_iobuf.so.2.0 00:03:21.121 SYMLINK libspdk_event_vmd.so 00:03:21.392 SYMLINK libspdk_event_iobuf.so 00:03:21.392 CC module/event/subsystems/accel/accel.o 00:03:21.671 LIB libspdk_event_accel.a 00:03:21.671 SO libspdk_event_accel.so.5.0 00:03:21.671 SYMLINK libspdk_event_accel.so 00:03:21.929 CC module/event/subsystems/bdev/bdev.o 00:03:22.187 LIB libspdk_event_bdev.a 00:03:22.187 SO libspdk_event_bdev.so.5.0 00:03:22.187 SYMLINK libspdk_event_bdev.so 00:03:22.447 CC module/event/subsystems/nbd/nbd.o 00:03:22.447 CC module/event/subsystems/scsi/scsi.o 00:03:22.447 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:22.447 CC module/event/subsystems/ublk/ublk.o 00:03:22.447 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:22.447 LIB libspdk_event_ublk.a 00:03:22.447 LIB libspdk_event_nbd.a 00:03:22.447 LIB libspdk_event_scsi.a 00:03:22.447 SO libspdk_event_ublk.so.2.0 00:03:22.447 SO libspdk_event_nbd.so.5.0 00:03:22.705 SO libspdk_event_scsi.so.5.0 00:03:22.705 SYMLINK libspdk_event_nbd.so 00:03:22.705 SYMLINK libspdk_event_ublk.so 00:03:22.705 LIB libspdk_event_nvmf.a 00:03:22.705 SYMLINK libspdk_event_scsi.so 00:03:22.705 SO libspdk_event_nvmf.so.5.0 00:03:22.705 SYMLINK libspdk_event_nvmf.so 00:03:22.705 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.705 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.963 LIB libspdk_event_vhost_scsi.a 00:03:22.963 LIB libspdk_event_iscsi.a 00:03:22.963 SO libspdk_event_vhost_scsi.so.2.0 00:03:22.963 SO libspdk_event_iscsi.so.5.0 00:03:22.963 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.222 SYMLINK libspdk_event_iscsi.so 00:03:23.222 SO libspdk.so.5.0 00:03:23.222 SYMLINK libspdk.so 00:03:23.480 CC app/trace_record/trace_record.o 00:03:23.480 TEST_HEADER include/spdk/accel.h 00:03:23.480 CXX app/trace/trace.o 00:03:23.480 TEST_HEADER include/spdk/accel_module.h 00:03:23.480 TEST_HEADER include/spdk/assert.h 00:03:23.480 TEST_HEADER include/spdk/barrier.h 00:03:23.480 TEST_HEADER include/spdk/base64.h 00:03:23.480 TEST_HEADER include/spdk/bdev.h 00:03:23.480 TEST_HEADER include/spdk/bdev_module.h 00:03:23.480 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.480 TEST_HEADER include/spdk/bit_array.h 00:03:23.480 TEST_HEADER include/spdk/bit_pool.h 00:03:23.480 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.480 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.480 TEST_HEADER include/spdk/blobfs.h 00:03:23.480 TEST_HEADER include/spdk/blob.h 00:03:23.480 TEST_HEADER include/spdk/conf.h 00:03:23.480 TEST_HEADER include/spdk/config.h 00:03:23.480 TEST_HEADER include/spdk/cpuset.h 00:03:23.480 TEST_HEADER include/spdk/crc16.h 00:03:23.480 TEST_HEADER include/spdk/crc32.h 00:03:23.481 TEST_HEADER include/spdk/crc64.h 00:03:23.481 TEST_HEADER include/spdk/dif.h 00:03:23.481 TEST_HEADER include/spdk/dma.h 00:03:23.481 TEST_HEADER include/spdk/endian.h 00:03:23.481 TEST_HEADER 
include/spdk/env_dpdk.h 00:03:23.481 CC app/nvmf_tgt/nvmf_main.o 00:03:23.481 TEST_HEADER include/spdk/env.h 00:03:23.481 TEST_HEADER include/spdk/event.h 00:03:23.481 TEST_HEADER include/spdk/fd_group.h 00:03:23.481 CC examples/accel/perf/accel_perf.o 00:03:23.481 TEST_HEADER include/spdk/fd.h 00:03:23.481 TEST_HEADER include/spdk/file.h 00:03:23.481 TEST_HEADER include/spdk/ftl.h 00:03:23.481 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.481 TEST_HEADER include/spdk/hexlify.h 00:03:23.481 TEST_HEADER include/spdk/histogram_data.h 00:03:23.481 TEST_HEADER include/spdk/idxd.h 00:03:23.481 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.481 TEST_HEADER include/spdk/init.h 00:03:23.481 TEST_HEADER include/spdk/ioat.h 00:03:23.481 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.481 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.481 TEST_HEADER include/spdk/json.h 00:03:23.481 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.481 TEST_HEADER include/spdk/likely.h 00:03:23.481 CC test/dma/test_dma/test_dma.o 00:03:23.481 TEST_HEADER include/spdk/log.h 00:03:23.481 CC test/accel/dif/dif.o 00:03:23.481 TEST_HEADER include/spdk/lvol.h 00:03:23.481 TEST_HEADER include/spdk/memory.h 00:03:23.481 TEST_HEADER include/spdk/mmio.h 00:03:23.481 TEST_HEADER include/spdk/nbd.h 00:03:23.481 TEST_HEADER include/spdk/notify.h 00:03:23.481 CC test/bdev/bdevio/bdevio.o 00:03:23.481 TEST_HEADER include/spdk/nvme.h 00:03:23.481 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.481 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.481 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.481 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.481 CC test/blobfs/mkfs/mkfs.o 00:03:23.481 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.481 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.481 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.481 TEST_HEADER include/spdk/nvmf.h 00:03:23.481 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.481 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.481 TEST_HEADER include/spdk/opal.h 00:03:23.481 TEST_HEADER include/spdk/opal_spec.h 00:03:23.481 TEST_HEADER include/spdk/pci_ids.h 00:03:23.481 TEST_HEADER include/spdk/pipe.h 00:03:23.481 TEST_HEADER include/spdk/queue.h 00:03:23.481 TEST_HEADER include/spdk/reduce.h 00:03:23.481 TEST_HEADER include/spdk/rpc.h 00:03:23.481 TEST_HEADER include/spdk/scheduler.h 00:03:23.481 TEST_HEADER include/spdk/scsi.h 00:03:23.481 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.481 CC test/app/bdev_svc/bdev_svc.o 00:03:23.481 TEST_HEADER include/spdk/sock.h 00:03:23.481 TEST_HEADER include/spdk/stdinc.h 00:03:23.481 TEST_HEADER include/spdk/string.h 00:03:23.481 TEST_HEADER include/spdk/thread.h 00:03:23.481 TEST_HEADER include/spdk/trace.h 00:03:23.481 TEST_HEADER include/spdk/trace_parser.h 00:03:23.481 TEST_HEADER include/spdk/tree.h 00:03:23.481 TEST_HEADER include/spdk/ublk.h 00:03:23.481 TEST_HEADER include/spdk/util.h 00:03:23.481 TEST_HEADER include/spdk/uuid.h 00:03:23.481 TEST_HEADER include/spdk/version.h 00:03:23.481 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.481 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.481 TEST_HEADER include/spdk/vhost.h 00:03:23.739 TEST_HEADER include/spdk/vmd.h 00:03:23.739 TEST_HEADER include/spdk/xor.h 00:03:23.739 TEST_HEADER include/spdk/zipf.h 00:03:23.739 CXX test/cpp_headers/accel.o 00:03:23.739 LINK nvmf_tgt 00:03:23.739 LINK spdk_trace_record 00:03:23.739 LINK mkfs 00:03:23.739 LINK spdk_trace 00:03:23.739 LINK bdev_svc 00:03:23.739 CXX test/cpp_headers/accel_module.o 00:03:23.997 CXX test/cpp_headers/assert.o 
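The long run of TEST_HEADER include/spdk/*.h entries followed by CXX test/cpp_headers/*.o steps comes from a target that compiles every public SPDK header on its own under a C++ compiler, to prove each header is self-contained and usable from C++. Conceptually it amounts to the loop below; this is an illustrative sketch, not the actual makefile rule.

  # Illustrative only -- run from the SPDK repo root; the real test/cpp_headers
  # target generates a small source file per header and builds it via the normal makefiles.
  for h in include/spdk/*.h; do
      echo "#include <spdk/$(basename "$h")>" | g++ -x c++ -I include -c - -o /dev/null
  done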
00:03:23.997 LINK dif 00:03:23.997 CXX test/cpp_headers/barrier.o 00:03:23.997 LINK test_dma 00:03:23.997 LINK accel_perf 00:03:23.997 LINK bdevio 00:03:23.997 CC app/spdk_lspci/spdk_lspci.o 00:03:23.997 CXX test/cpp_headers/base64.o 00:03:23.997 CC app/iscsi_tgt/iscsi_tgt.o 00:03:23.997 CC test/app/histogram_perf/histogram_perf.o 00:03:24.256 CC app/spdk_tgt/spdk_tgt.o 00:03:24.256 CXX test/cpp_headers/bdev.o 00:03:24.256 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.256 CXX test/cpp_headers/bdev_module.o 00:03:24.256 LINK spdk_lspci 00:03:24.256 LINK histogram_perf 00:03:24.256 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.256 LINK iscsi_tgt 00:03:24.256 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.256 LINK spdk_tgt 00:03:24.513 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.513 CXX test/cpp_headers/bdev_zone.o 00:03:24.513 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.513 CXX test/cpp_headers/bit_array.o 00:03:24.513 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.513 LINK hello_bdev 00:03:24.513 CXX test/cpp_headers/bit_pool.o 00:03:24.513 LINK nvme_fuzz 00:03:24.514 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.772 CC app/spdk_nvme_perf/perf.o 00:03:24.772 CC app/spdk_nvme_identify/identify.o 00:03:24.772 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.772 CXX test/cpp_headers/blob_bdev.o 00:03:24.772 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.772 CXX test/cpp_headers/blobfs.o 00:03:25.030 LINK mem_callbacks 00:03:25.030 LINK vhost_fuzz 00:03:25.030 LINK spdk_nvme_discover 00:03:25.030 CC test/event/event_perf/event_perf.o 00:03:25.030 CXX test/cpp_headers/blob.o 00:03:25.030 CC app/spdk_top/spdk_top.o 00:03:25.288 LINK bdevperf 00:03:25.288 CC test/env/vtophys/vtophys.o 00:03:25.288 LINK event_perf 00:03:25.288 CXX test/cpp_headers/conf.o 00:03:25.288 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.288 CC test/env/memory/memory_ut.o 00:03:25.288 LINK vtophys 00:03:25.546 CXX test/cpp_headers/config.o 00:03:25.546 LINK env_dpdk_post_init 00:03:25.546 CXX test/cpp_headers/cpuset.o 00:03:25.546 CC test/event/reactor/reactor.o 00:03:25.546 CXX test/cpp_headers/crc16.o 00:03:25.546 CC examples/blob/hello_world/hello_blob.o 00:03:25.546 LINK spdk_nvme_perf 00:03:25.546 LINK spdk_nvme_identify 00:03:25.546 CXX test/cpp_headers/crc32.o 00:03:25.546 LINK reactor 00:03:25.805 CC test/env/pci/pci_ut.o 00:03:25.805 CC test/app/jsoncat/jsoncat.o 00:03:25.805 CXX test/cpp_headers/crc64.o 00:03:25.805 LINK hello_blob 00:03:25.805 CC test/event/reactor_perf/reactor_perf.o 00:03:25.805 CC test/app/stub/stub.o 00:03:25.805 LINK jsoncat 00:03:26.063 CC examples/ioat/perf/perf.o 00:03:26.063 LINK spdk_top 00:03:26.063 CXX test/cpp_headers/dif.o 00:03:26.063 LINK reactor_perf 00:03:26.063 LINK stub 00:03:26.063 CXX test/cpp_headers/dma.o 00:03:26.063 LINK pci_ut 00:03:26.063 LINK iscsi_fuzz 00:03:26.321 LINK ioat_perf 00:03:26.321 CC examples/blob/cli/blobcli.o 00:03:26.321 CC test/event/app_repeat/app_repeat.o 00:03:26.321 CXX test/cpp_headers/endian.o 00:03:26.321 LINK memory_ut 00:03:26.321 CC app/vhost/vhost.o 00:03:26.321 CC app/spdk_dd/spdk_dd.o 00:03:26.321 CC test/lvol/esnap/esnap.o 00:03:26.321 LINK app_repeat 00:03:26.321 CC examples/ioat/verify/verify.o 00:03:26.321 CXX test/cpp_headers/env_dpdk.o 00:03:26.579 LINK vhost 00:03:26.579 CC test/nvme/aer/aer.o 00:03:26.579 CC test/rpc_client/rpc_client_test.o 00:03:26.579 CC examples/nvme/hello_world/hello_world.o 00:03:26.579 CXX test/cpp_headers/env.o 00:03:26.579 LINK verify 00:03:26.579 LINK blobcli 
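Among the binaries compiled above are the standalone NVMe clients (spdk_nvme_perf, spdk_nvme_identify, spdk_nvme_discover) that the NVMe-oF functional tests later point at a TCP listener; the address and port defaults (127.0.0.1, 4420) are exported by nvmf/common.sh further down in this log. A rough usage sketch follows; the binary location and flags vary between SPDK releases, so treat them as assumptions rather than values taken from this run.

  # Hypothetical invocation -- path and flags are assumptions, not shown in this log.
  ./app/spdk_nvme_identify/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420'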
00:03:26.579 CC test/event/scheduler/scheduler.o 00:03:26.837 CXX test/cpp_headers/event.o 00:03:26.837 LINK spdk_dd 00:03:26.837 LINK rpc_client_test 00:03:26.837 LINK hello_world 00:03:26.837 LINK aer 00:03:26.837 CC examples/sock/hello_world/hello_sock.o 00:03:26.837 CXX test/cpp_headers/fd_group.o 00:03:26.837 LINK scheduler 00:03:26.837 CC app/fio/nvme/fio_plugin.o 00:03:27.095 CC app/fio/bdev/fio_plugin.o 00:03:27.095 CC examples/nvme/reconnect/reconnect.o 00:03:27.095 CC test/thread/poller_perf/poller_perf.o 00:03:27.096 CC test/nvme/reset/reset.o 00:03:27.096 CXX test/cpp_headers/fd.o 00:03:27.096 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.096 LINK hello_sock 00:03:27.096 LINK poller_perf 00:03:27.354 CC examples/vmd/led/led.o 00:03:27.354 LINK lsvmd 00:03:27.354 CXX test/cpp_headers/file.o 00:03:27.354 LINK reset 00:03:27.354 LINK led 00:03:27.354 LINK reconnect 00:03:27.354 CXX test/cpp_headers/ftl.o 00:03:27.354 CC examples/util/zipf/zipf.o 00:03:27.612 CC examples/nvmf/nvmf/nvmf.o 00:03:27.612 LINK spdk_nvme 00:03:27.612 LINK spdk_bdev 00:03:27.612 CC examples/thread/thread/thread_ex.o 00:03:27.612 CXX test/cpp_headers/gpt_spec.o 00:03:27.612 CC test/nvme/sgl/sgl.o 00:03:27.612 LINK zipf 00:03:27.612 CXX test/cpp_headers/hexlify.o 00:03:27.612 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.612 CC test/nvme/e2edp/nvme_dp.o 00:03:27.871 CC examples/idxd/perf/perf.o 00:03:27.871 CXX test/cpp_headers/histogram_data.o 00:03:27.871 LINK nvmf 00:03:27.871 CC examples/nvme/arbitration/arbitration.o 00:03:27.871 LINK thread 00:03:27.871 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.871 LINK sgl 00:03:27.871 LINK nvme_dp 00:03:27.871 CXX test/cpp_headers/idxd.o 00:03:28.129 CXX test/cpp_headers/idxd_spec.o 00:03:28.129 LINK interrupt_tgt 00:03:28.129 CC test/nvme/overhead/overhead.o 00:03:28.129 LINK idxd_perf 00:03:28.129 CC test/nvme/err_injection/err_injection.o 00:03:28.129 LINK nvme_manage 00:03:28.129 CC test/nvme/startup/startup.o 00:03:28.129 CXX test/cpp_headers/init.o 00:03:28.129 LINK arbitration 00:03:28.387 CC test/nvme/reserve/reserve.o 00:03:28.387 CC test/nvme/simple_copy/simple_copy.o 00:03:28.387 LINK err_injection 00:03:28.387 CXX test/cpp_headers/ioat.o 00:03:28.387 CC examples/nvme/hotplug/hotplug.o 00:03:28.387 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:28.387 LINK startup 00:03:28.387 LINK overhead 00:03:28.387 CC examples/nvme/abort/abort.o 00:03:28.387 LINK reserve 00:03:28.646 CXX test/cpp_headers/ioat_spec.o 00:03:28.646 CXX test/cpp_headers/iscsi_spec.o 00:03:28.646 LINK simple_copy 00:03:28.646 LINK cmb_copy 00:03:28.646 LINK hotplug 00:03:28.646 CC test/nvme/connect_stress/connect_stress.o 00:03:28.646 CC test/nvme/boot_partition/boot_partition.o 00:03:28.646 CXX test/cpp_headers/json.o 00:03:28.646 CC test/nvme/compliance/nvme_compliance.o 00:03:28.903 CXX test/cpp_headers/jsonrpc.o 00:03:28.903 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.903 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.903 CXX test/cpp_headers/likely.o 00:03:28.903 LINK abort 00:03:28.903 LINK boot_partition 00:03:28.903 LINK connect_stress 00:03:28.903 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.162 CC test/nvme/fdp/fdp.o 00:03:29.162 CXX test/cpp_headers/log.o 00:03:29.162 LINK pmr_persistence 00:03:29.162 LINK fused_ordering 00:03:29.162 CXX test/cpp_headers/lvol.o 00:03:29.162 LINK nvme_compliance 00:03:29.162 CXX test/cpp_headers/memory.o 00:03:29.162 CC test/nvme/cuse/cuse.o 00:03:29.162 LINK doorbell_aers 00:03:29.162 CXX 
test/cpp_headers/mmio.o 00:03:29.162 CXX test/cpp_headers/nbd.o 00:03:29.162 CXX test/cpp_headers/notify.o 00:03:29.162 CXX test/cpp_headers/nvme.o 00:03:29.162 CXX test/cpp_headers/nvme_intel.o 00:03:29.162 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.419 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.419 LINK fdp 00:03:29.419 CXX test/cpp_headers/nvme_spec.o 00:03:29.419 CXX test/cpp_headers/nvme_zns.o 00:03:29.419 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.419 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.419 CXX test/cpp_headers/nvmf.o 00:03:29.419 CXX test/cpp_headers/nvmf_transport.o 00:03:29.419 CXX test/cpp_headers/nvmf_spec.o 00:03:29.419 CXX test/cpp_headers/opal.o 00:03:29.676 CXX test/cpp_headers/opal_spec.o 00:03:29.676 CXX test/cpp_headers/pci_ids.o 00:03:29.676 CXX test/cpp_headers/pipe.o 00:03:29.676 CXX test/cpp_headers/queue.o 00:03:29.676 CXX test/cpp_headers/reduce.o 00:03:29.676 CXX test/cpp_headers/rpc.o 00:03:29.676 CXX test/cpp_headers/scheduler.o 00:03:29.676 CXX test/cpp_headers/scsi.o 00:03:29.676 CXX test/cpp_headers/scsi_spec.o 00:03:29.676 CXX test/cpp_headers/sock.o 00:03:29.934 CXX test/cpp_headers/stdinc.o 00:03:29.934 CXX test/cpp_headers/string.o 00:03:29.934 CXX test/cpp_headers/thread.o 00:03:29.934 CXX test/cpp_headers/trace.o 00:03:29.934 CXX test/cpp_headers/trace_parser.o 00:03:29.934 CXX test/cpp_headers/tree.o 00:03:29.934 CXX test/cpp_headers/ublk.o 00:03:29.934 CXX test/cpp_headers/util.o 00:03:29.934 CXX test/cpp_headers/uuid.o 00:03:29.934 CXX test/cpp_headers/version.o 00:03:29.934 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.934 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.934 CXX test/cpp_headers/vhost.o 00:03:29.934 CXX test/cpp_headers/vmd.o 00:03:30.192 CXX test/cpp_headers/xor.o 00:03:30.192 CXX test/cpp_headers/zipf.o 00:03:30.192 LINK cuse 00:03:31.126 LINK esnap 00:03:31.693 00:03:31.693 real 1m1.564s 00:03:31.693 user 6m35.237s 00:03:31.693 sys 1m24.086s 00:03:31.693 11:07:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:31.693 11:07:13 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.693 ************************************ 00:03:31.693 END TEST make 00:03:31.693 ************************************ 00:03:31.693 11:07:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:31.693 11:07:13 -- nvmf/common.sh@7 -- # uname -s 00:03:31.693 11:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.693 11:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.693 11:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.693 11:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.693 11:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.693 11:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.693 11:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.693 11:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.693 11:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.693 11:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.693 11:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:03:31.693 11:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:03:31.693 11:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.693 11:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.693 11:07:13 -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:03:31.693 11:07:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:31.693 11:07:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.693 11:07:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.693 11:07:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.693 11:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.693 11:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.693 11:07:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.693 11:07:13 -- paths/export.sh@5 -- # export PATH 00:03:31.693 11:07:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.693 11:07:13 -- nvmf/common.sh@46 -- # : 0 00:03:31.693 11:07:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:31.693 11:07:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:31.693 11:07:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:31.693 11:07:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.693 11:07:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.693 11:07:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:31.693 11:07:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:31.693 11:07:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:31.693 11:07:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.693 11:07:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.693 11:07:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.693 11:07:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.693 11:07:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:31.693 11:07:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.693 11:07:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:31.693 11:07:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.693 11:07:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.693 11:07:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.693 11:07:13 -- spdk/autotest.sh@48 -- # udevadm_pid=48023 00:03:31.693 11:07:13 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:31.693 11:07:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.693 11:07:13 -- spdk/autotest.sh@54 -- # echo 48032 00:03:31.693 11:07:13 -- spdk/autotest.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:31.693 11:07:13 -- spdk/autotest.sh@56 -- # echo 48034 00:03:31.693 11:07:13 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:31.693 11:07:13 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:31.693 11:07:13 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.693 11:07:13 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:31.693 11:07:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:31.693 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:03:31.693 11:07:13 -- spdk/autotest.sh@70 -- # create_test_list 00:03:31.693 11:07:13 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:31.693 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:03:31.952 11:07:13 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:31.952 11:07:13 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:31.952 11:07:13 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:31.952 11:07:13 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:31.952 11:07:13 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:31.952 11:07:13 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:31.952 11:07:13 -- common/autotest_common.sh@1440 -- # uname 00:03:31.952 11:07:13 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:31.952 11:07:13 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:31.952 11:07:13 -- common/autotest_common.sh@1460 -- # uname 00:03:31.952 11:07:13 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:31.952 11:07:13 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:31.952 11:07:13 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:31.952 11:07:13 -- spdk/autotest.sh@83 -- # hash lcov 00:03:31.952 11:07:13 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:31.952 11:07:13 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:31.952 --rc lcov_branch_coverage=1 00:03:31.952 --rc lcov_function_coverage=1 00:03:31.952 --rc genhtml_branch_coverage=1 00:03:31.952 --rc genhtml_function_coverage=1 00:03:31.952 --rc genhtml_legend=1 00:03:31.952 --rc geninfo_all_blocks=1 00:03:31.952 ' 00:03:31.952 11:07:13 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:31.952 --rc lcov_branch_coverage=1 00:03:31.952 --rc lcov_function_coverage=1 00:03:31.952 --rc genhtml_branch_coverage=1 00:03:31.952 --rc genhtml_function_coverage=1 00:03:31.952 --rc genhtml_legend=1 00:03:31.952 --rc geninfo_all_blocks=1 00:03:31.952 ' 00:03:31.952 11:07:13 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:31.952 --rc lcov_branch_coverage=1 00:03:31.952 --rc lcov_function_coverage=1 00:03:31.952 --rc genhtml_branch_coverage=1 00:03:31.952 --rc genhtml_function_coverage=1 00:03:31.952 --rc genhtml_legend=1 00:03:31.952 --rc geninfo_all_blocks=1 00:03:31.952 --no-external' 00:03:31.952 11:07:13 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:31.952 --rc lcov_branch_coverage=1 00:03:31.952 --rc lcov_function_coverage=1 00:03:31.952 --rc genhtml_branch_coverage=1 00:03:31.952 --rc genhtml_function_coverage=1 00:03:31.952 --rc genhtml_legend=1 00:03:31.952 --rc geninfo_all_blocks=1 00:03:31.952 --no-external' 00:03:31.952 11:07:13 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:31.952 lcov: LCOV version 1.15 00:03:31.952 11:07:13 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.069 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:40.069 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:40.069 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:40.069 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:40.069 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:40.069 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:58.156 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:58.156 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:58.156 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:58.156 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:58.156 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:58.156 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:58.156 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:58.157 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:58.157 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 
00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:58.157 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:58.157 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:58.158 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:58.158 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:58.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:01.488 11:07:42 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:01.488 11:07:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:01.488 11:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:01.488 11:07:42 -- spdk/autotest.sh@102 -- # rm -f 00:04:01.488 11:07:42 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.055 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:02.055 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:02.055 11:07:43 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:02.055 11:07:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:02.055 11:07:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:02.055 11:07:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:02.055 11:07:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.055 11:07:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:02.055 11:07:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:02.055 11:07:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.055 11:07:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:02.055 11:07:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:02.055 
11:07:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.055 11:07:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:02.055 11:07:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:02.055 11:07:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.055 11:07:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:02.055 11:07:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:02.055 11:07:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.055 11:07:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.055 11:07:43 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:02.055 11:07:43 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:02.055 11:07:43 -- spdk/autotest.sh@121 -- # grep -v p 00:04:02.055 11:07:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.055 11:07:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.055 11:07:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:02.055 11:07:43 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:02.055 11:07:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.055 No valid GPT data, bailing 00:04:02.055 11:07:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.055 11:07:43 -- scripts/common.sh@393 -- # pt= 00:04:02.055 11:07:43 -- scripts/common.sh@394 -- # return 1 00:04:02.055 11:07:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.055 1+0 records in 00:04:02.055 1+0 records out 00:04:02.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416422 s, 252 MB/s 00:04:02.055 11:07:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.055 11:07:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.055 11:07:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:02.055 11:07:43 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:02.055 11:07:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:02.055 No valid GPT data, bailing 00:04:02.055 11:07:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:02.055 11:07:43 -- scripts/common.sh@393 -- # pt= 00:04:02.055 11:07:43 -- scripts/common.sh@394 -- # return 1 00:04:02.055 11:07:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:02.314 1+0 records in 00:04:02.314 1+0 records out 00:04:02.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450127 s, 233 MB/s 00:04:02.314 11:07:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.314 11:07:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.314 11:07:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:04:02.314 11:07:43 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:02.314 11:07:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:02.314 No valid 
GPT data, bailing 00:04:02.314 11:07:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:02.314 11:07:43 -- scripts/common.sh@393 -- # pt= 00:04:02.314 11:07:43 -- scripts/common.sh@394 -- # return 1 00:04:02.314 11:07:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:02.314 1+0 records in 00:04:02.314 1+0 records out 00:04:02.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00357488 s, 293 MB/s 00:04:02.314 11:07:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.314 11:07:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.314 11:07:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:04:02.314 11:07:43 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:02.314 11:07:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:02.314 No valid GPT data, bailing 00:04:02.314 11:07:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:02.314 11:07:43 -- scripts/common.sh@393 -- # pt= 00:04:02.314 11:07:43 -- scripts/common.sh@394 -- # return 1 00:04:02.314 11:07:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:02.314 1+0 records in 00:04:02.314 1+0 records out 00:04:02.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375695 s, 279 MB/s 00:04:02.314 11:07:43 -- spdk/autotest.sh@129 -- # sync 00:04:02.882 11:07:44 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.882 11:07:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.882 11:07:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.786 11:07:46 -- spdk/autotest.sh@135 -- # uname -s 00:04:04.786 11:07:46 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:04.786 11:07:46 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:04.786 11:07:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.786 11:07:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.786 11:07:46 -- common/autotest_common.sh@10 -- # set +x 00:04:04.786 ************************************ 00:04:04.786 START TEST setup.sh 00:04:04.786 ************************************ 00:04:04.786 11:07:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:04.786 * Looking for test storage... 00:04:04.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.786 11:07:46 -- setup/test-setup.sh@10 -- # uname -s 00:04:04.786 11:07:46 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:04.786 11:07:46 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:04.786 11:07:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.786 11:07:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.786 11:07:46 -- common/autotest_common.sh@10 -- # set +x 00:04:04.786 ************************************ 00:04:04.786 START TEST acl 00:04:04.786 ************************************ 00:04:04.786 11:07:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:04.786 * Looking for test storage... 
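The pre-cleanup pass above walks every whole NVMe namespace (grep -v p drops partitions), asks blkid for a partition-table type, and zeroes the first 1 MiB of any device that reports none before the setup tests begin. A minimal bash sketch of that pattern, for illustration only (the real autotest also consults scripts/spdk-gpt.py and skips zoned devices):

    for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
        # Ask blkid for the partition-table type; an empty answer means the
        # namespace carries no recognizable GPT/MBR and is safe to scrub.
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done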
00:04:04.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.786 11:07:46 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:04.786 11:07:46 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:04.786 11:07:46 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:04.786 11:07:46 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:04.786 11:07:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:04.786 11:07:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:04.786 11:07:46 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:04.786 11:07:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.786 11:07:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:04.786 11:07:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:04.786 11:07:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:04.786 11:07:46 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:04.786 11:07:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.786 11:07:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:04.787 11:07:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:04.787 11:07:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:04.787 11:07:46 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:04.787 11:07:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:04.787 11:07:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:04.787 11:07:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:04.787 11:07:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:04.787 11:07:46 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:04.787 11:07:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:04.787 11:07:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:04.787 11:07:46 -- setup/acl.sh@12 -- # devs=() 00:04:04.787 11:07:46 -- setup/acl.sh@12 -- # declare -a devs 00:04:04.787 11:07:46 -- setup/acl.sh@13 -- # drivers=() 00:04:04.787 11:07:46 -- setup/acl.sh@13 -- # declare -A drivers 00:04:04.787 11:07:46 -- setup/acl.sh@51 -- # setup reset 00:04:04.787 11:07:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.787 11:07:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.355 11:07:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:05.355 11:07:46 -- setup/acl.sh@16 -- # local dev driver 00:04:05.355 11:07:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.355 11:07:46 -- setup/acl.sh@15 -- # setup output status 00:04:05.355 11:07:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.355 11:07:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:05.613 Hugepages 00:04:05.613 node hugesize free / total 00:04:05.613 11:07:47 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:05.613 11:07:47 -- setup/acl.sh@19 -- # continue 00:04:05.613 11:07:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.613 00:04:05.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:05.614 11:07:47 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:05.614 11:07:47 -- setup/acl.sh@19 -- # continue 00:04:05.614 11:07:47 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:04:05.614 11:07:47 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:05.614 11:07:47 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:05.614 11:07:47 -- setup/acl.sh@20 -- # continue 00:04:05.614 11:07:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.614 11:07:47 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:05.614 11:07:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:05.614 11:07:47 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:05.614 11:07:47 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:05.614 11:07:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:05.614 11:07:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.872 11:07:47 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:05.872 11:07:47 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:05.872 11:07:47 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:05.872 11:07:47 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:05.872 11:07:47 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:05.872 11:07:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.872 11:07:47 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:05.872 11:07:47 -- setup/acl.sh@54 -- # run_test denied denied 00:04:05.872 11:07:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:05.872 11:07:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.872 11:07:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.872 ************************************ 00:04:05.872 START TEST denied 00:04:05.872 ************************************ 00:04:05.872 11:07:47 -- common/autotest_common.sh@1104 -- # denied 00:04:05.872 11:07:47 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:05.872 11:07:47 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:05.872 11:07:47 -- setup/acl.sh@38 -- # setup output config 00:04:05.872 11:07:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.872 11:07:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.808 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:06.808 11:07:48 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:06.808 11:07:48 -- setup/acl.sh@28 -- # local dev driver 00:04:06.808 11:07:48 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.808 11:07:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:06.808 11:07:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:06.808 11:07:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.808 11:07:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.808 11:07:48 -- setup/acl.sh@41 -- # setup reset 00:04:06.808 11:07:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.809 11:07:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.375 00:04:07.376 real 0m1.439s 00:04:07.376 user 0m0.603s 00:04:07.376 sys 0m0.762s 00:04:07.376 ************************************ 00:04:07.376 END TEST denied 00:04:07.376 11:07:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.376 11:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:07.376 ************************************ 00:04:07.376 11:07:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:07.376 11:07:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.376 11:07:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.376 
11:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:07.376 ************************************ 00:04:07.376 START TEST allowed 00:04:07.376 ************************************ 00:04:07.376 11:07:48 -- common/autotest_common.sh@1104 -- # allowed 00:04:07.376 11:07:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:07.376 11:07:48 -- setup/acl.sh@45 -- # setup output config 00:04:07.376 11:07:48 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:07.376 11:07:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.376 11:07:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.943 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.943 11:07:49 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:07.943 11:07:49 -- setup/acl.sh@28 -- # local dev driver 00:04:07.943 11:07:49 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.943 11:07:49 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:07.943 11:07:49 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:07.943 11:07:49 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.943 11:07:49 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.943 11:07:49 -- setup/acl.sh@48 -- # setup reset 00:04:07.943 11:07:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.943 11:07:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.880 00:04:08.880 real 0m1.502s 00:04:08.880 user 0m0.685s 00:04:08.880 sys 0m0.811s 00:04:08.880 11:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.880 ************************************ 00:04:08.880 END TEST allowed 00:04:08.880 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.880 ************************************ 00:04:08.880 ************************************ 00:04:08.880 END TEST acl 00:04:08.880 ************************************ 00:04:08.880 00:04:08.880 real 0m4.166s 00:04:08.880 user 0m1.810s 00:04:08.880 sys 0m2.301s 00:04:08.880 11:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.880 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.880 11:07:50 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:08.880 11:07:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.880 11:07:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.880 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.880 ************************************ 00:04:08.880 START TEST hugepages 00:04:08.880 ************************************ 00:04:08.880 11:07:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:08.880 * Looking for test storage... 
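Both ACL tests above reduce to one sysfs check: after setup.sh runs with PCI_BLOCKED or PCI_ALLOWED set, is the controller still bound to the kernel nvme driver? A small illustrative helper, assuming the same sysfs layout the logged verify step relies on (the function name is hypothetical):

    # True if the PCI function $1 (e.g. 0000:00:07.0) is bound to the nvme driver.
    is_bound_to_nvme() {
        local bdf=$1 driver
        [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
        driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        [[ ${driver##*/} == nvme ]]
    }

    is_bound_to_nvme 0000:00:07.0 && echo '0000:00:07.0 is on the nvme driver'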
00:04:08.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.880 11:07:50 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:08.880 11:07:50 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:08.880 11:07:50 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:08.880 11:07:50 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:08.880 11:07:50 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:08.880 11:07:50 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:08.880 11:07:50 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:08.880 11:07:50 -- setup/common.sh@18 -- # local node= 00:04:08.880 11:07:50 -- setup/common.sh@19 -- # local var val 00:04:08.880 11:07:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.880 11:07:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.880 11:07:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.880 11:07:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.880 11:07:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.880 11:07:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.880 11:07:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 5994672 kB' 'MemAvailable: 7371116 kB' 'Buffers: 2684 kB' 'Cached: 1590000 kB' 'SwapCached: 0 kB' 'Active: 441816 kB' 'Inactive: 1254344 kB' 'Active(anon): 113984 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254344 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 105096 kB' 'Mapped: 50692 kB' 'Shmem: 10508 kB' 'KReclaimable: 62428 kB' 'Slab: 155776 kB' 'SReclaimable: 62428 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6588 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 298292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- 
setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.880 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.880 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # continue 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.881 11:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.881 11:07:50 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.881 11:07:50 -- setup/common.sh@33 -- # echo 2048 00:04:08.881 11:07:50 -- setup/common.sh@33 -- # return 0 00:04:08.881 11:07:50 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:08.881 11:07:50 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:08.881 11:07:50 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:09.141 11:07:50 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:09.141 11:07:50 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:09.141 11:07:50 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
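The trace above is setup/common.sh stepping through /proc/meminfo key by key with IFS=': ' read -r var val _ until it reaches Hugepagesize, echoing 2048, which setup/hugepages.sh then records as default_hugepages together with the sysfs and procfs nr_hugepages paths. A minimal stand-alone sketch of that lookup pattern, under a hypothetical helper name (not the SPDK get_meminfo itself):

    # Sketch only: scan /proc/meminfo the way the traced loop does and
    # return the default hugepage size in kB (2048 on this runner).
    get_default_hugepagesize_kb() {
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == Hugepagesize ]]; then
                echo "$val"     # value column; the trailing "kB" lands in _
                return 0
            fi
        done < /proc/meminfo
        return 1
    }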
00:04:09.141 11:07:50 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:09.141 11:07:50 -- setup/hugepages.sh@207 -- # get_nodes 00:04:09.141 11:07:50 -- setup/hugepages.sh@27 -- # local node 00:04:09.141 11:07:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.141 11:07:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:09.141 11:07:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.141 11:07:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.141 11:07:50 -- setup/hugepages.sh@208 -- # clear_hp 00:04:09.141 11:07:50 -- setup/hugepages.sh@37 -- # local node hp 00:04:09.141 11:07:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.141 11:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.141 11:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.141 11:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.141 11:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.141 11:07:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.141 11:07:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.141 11:07:50 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:09.141 11:07:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.141 11:07:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.141 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.141 ************************************ 00:04:09.141 START TEST default_setup 00:04:09.141 ************************************ 00:04:09.141 11:07:50 -- common/autotest_common.sh@1104 -- # default_setup 00:04:09.141 11:07:50 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:09.141 11:07:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.141 11:07:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:09.141 11:07:50 -- setup/hugepages.sh@51 -- # shift 00:04:09.141 11:07:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:09.141 11:07:50 -- setup/hugepages.sh@52 -- # local node_ids 00:04:09.141 11:07:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.141 11:07:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.141 11:07:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:09.141 11:07:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:09.141 11:07:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.141 11:07:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.141 11:07:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:09.141 11:07:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.141 11:07:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.141 11:07:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:09.141 11:07:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.141 11:07:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:09.141 11:07:50 -- setup/hugepages.sh@73 -- # return 0 00:04:09.141 11:07:50 -- setup/hugepages.sh@137 -- # setup output 00:04:09.141 11:07:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.141 11:07:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.709 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.709 
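run_test default_setup above calls get_test_nr_hugepages 2097152 0, i.e. 2 GiB of hugepage memory pinned to node 0; with the 2048 kB page size detected earlier, that is where the nr_hugepages=1024 in the trace comes from before setup.sh rebinds the NVMe devices to uio_pci_generic. A rough sketch of that arithmetic, with illustrative variable names rather than the script's own:

    # Sketch: reproduce the hugepage count implied by the trace above.
    size_kb=2097152            # memory requested for the test, in kB
    hugepage_kb=2048           # default hugepage size from the meminfo scan
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=1024 on node 0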
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.971 11:07:51 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:09.971 11:07:51 -- setup/hugepages.sh@89 -- # local node 00:04:09.971 11:07:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.971 11:07:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.971 11:07:51 -- setup/hugepages.sh@92 -- # local surp 00:04:09.971 11:07:51 -- setup/hugepages.sh@93 -- # local resv 00:04:09.971 11:07:51 -- setup/hugepages.sh@94 -- # local anon 00:04:09.971 11:07:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.971 11:07:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.971 11:07:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.971 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:09.971 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:09.971 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.971 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.971 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.971 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.971 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.971 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091380 kB' 'MemAvailable: 9467628 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456792 kB' 'Inactive: 1254348 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120156 kB' 'Mapped: 50796 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155380 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6544 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.971 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.971 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 
11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 
-- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.972 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:09.972 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:09.972 11:07:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.972 11:07:51 -- 
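With anon=0 settled, verify_nr_hugepages goes on below to pull HugePages_Surp, HugePages_Rsvd and finally HugePages_Total out of the same meminfo snapshot and checks that the total matches nr_hugepages plus the surplus and reserved counts. A compact sketch of that accounting, using a hypothetical meminfo_value helper in place of the script's get_meminfo:

    # Sketch: read single meminfo counters and repeat the check that the
    # traced verify_nr_hugepages performs for the 1024 pages requested.
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    nr_hugepages=1024                        # requested by default_setup
    surp=$(meminfo_value HugePages_Surp)     # 0 in this run
    resv=$(meminfo_value HugePages_Rsvd)     # 0 in this run
    total=$(meminfo_value HugePages_Total)   # expected to equal 1024
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting OK'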
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.972 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.972 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:09.972 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:09.972 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.972 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.972 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.972 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.972 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.972 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091380 kB' 'MemAvailable: 9467628 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456616 kB' 'Inactive: 1254348 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155376 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6544 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.972 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.972 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- 
setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.973 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.973 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.974 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:09.974 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:09.974 11:07:51 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.974 11:07:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.974 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.974 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:09.974 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:09.974 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.974 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.974 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.974 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.974 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.974 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091380 kB' 'MemAvailable: 9467628 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 
'SwapCached: 0 kB' 'Active: 456172 kB' 'Inactive: 1254348 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119460 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155376 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6528 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.974 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.974 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 
00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.975 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:09.975 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:09.975 11:07:51 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.975 nr_hugepages=1024 00:04:09.975 11:07:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.975 resv_hugepages=0 00:04:09.975 11:07:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.975 surplus_hugepages=0 00:04:09.975 11:07:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.975 anon_hugepages=0 00:04:09.975 11:07:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.975 11:07:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.975 11:07:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.975 11:07:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.975 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.975 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:09.975 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:09.975 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.975 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.975 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.975 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.975 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.975 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091380 kB' 'MemAvailable: 9467628 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 455980 kB' 'Inactive: 1254348 kB' 'Active(anon): 128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119256 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155376 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6528 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.975 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.975 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 
00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.976 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.976 11:07:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.977 11:07:51 -- setup/common.sh@33 -- # echo 1024 
00:04:09.977 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:09.977 11:07:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.977 11:07:51 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.977 11:07:51 -- setup/hugepages.sh@27 -- # local node 00:04:09.977 11:07:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.977 11:07:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.977 11:07:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.977 11:07:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.977 11:07:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.977 11:07:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.977 11:07:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.977 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.977 11:07:51 -- setup/common.sh@18 -- # local node=0 00:04:09.977 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:09.977 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.977 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.977 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.977 11:07:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.977 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.977 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8090876 kB' 'MemUsed: 4148236 kB' 'SwapCached: 0 kB' 'Active: 456224 kB' 'Inactive: 1254348 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1592676 kB' 'Mapped: 50708 kB' 'AnonPages: 119500 kB' 'Shmem: 10484 kB' 'KernelStack: 6528 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155372 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 
11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 
11:07:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.977 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.977 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # continue 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.978 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.978 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.978 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:09.978 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:09.978 11:07:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.978 11:07:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.978 11:07:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.978 11:07:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.978 node0=1024 expecting 1024 00:04:09.978 11:07:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.978 11:07:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.978 00:04:09.978 real 0m0.995s 00:04:09.978 user 0m0.497s 00:04:09.978 sys 0m0.448s 00:04:09.978 11:07:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.978 11:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.978 ************************************ 00:04:09.978 END TEST default_setup 00:04:09.978 ************************************ 00:04:09.978 11:07:51 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:09.978 11:07:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.978 11:07:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.978 11:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.978 ************************************ 00:04:09.978 START TEST per_node_1G_alloc 00:04:09.978 ************************************ 
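The long runs of '[[ <field> == ... ]] / continue' entries in the trace above are setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo) one 'key: value' pair at a time until the requested field matches, then echoing its value: HugePages_Total returned 1024 and HugePages_Surp on node 0 returned 0 for default_setup, and per_node_1G_alloc below exercises the same helper again. A minimal standalone sketch of that pattern, assuming bash 4+ (the function name and argument handling here are illustrative, not a copy of the SPDK script):

  #!/usr/bin/env bash
  # get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo,
  # or from the per-node meminfo file when NODE is given.
  get_meminfo() {
      local get=$1 node=$2 line var val rest
      local mem_f=/proc/meminfo
      local -a mem
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      for line in "${mem[@]}"; do
          line=${line#"Node $node "}          # per-node lines carry a "Node <n> " prefix
          IFS=': ' read -r var val rest <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total     # 1024 in the default_setup run above
  get_meminfo HugePages_Surp 0    # 0 surplus pages on NUMA node 0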
00:04:09.978 11:07:51 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:09.978 11:07:51 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:09.978 11:07:51 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:09.978 11:07:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:09.978 11:07:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:09.978 11:07:51 -- setup/hugepages.sh@51 -- # shift 00:04:09.978 11:07:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:09.978 11:07:51 -- setup/hugepages.sh@52 -- # local node_ids 00:04:09.978 11:07:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.978 11:07:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:09.978 11:07:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:09.978 11:07:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:09.978 11:07:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.978 11:07:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:09.978 11:07:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:09.978 11:07:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.978 11:07:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.978 11:07:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:09.978 11:07:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.978 11:07:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:09.978 11:07:51 -- setup/hugepages.sh@73 -- # return 0 00:04:09.978 11:07:51 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:09.978 11:07:51 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:09.978 11:07:51 -- setup/hugepages.sh@146 -- # setup output 00:04:09.978 11:07:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.978 11:07:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.550 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.550 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.550 11:07:51 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:10.550 11:07:51 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:10.550 11:07:51 -- setup/hugepages.sh@89 -- # local node 00:04:10.550 11:07:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.550 11:07:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.550 11:07:51 -- setup/hugepages.sh@92 -- # local surp 00:04:10.550 11:07:51 -- setup/hugepages.sh@93 -- # local resv 00:04:10.550 11:07:51 -- setup/hugepages.sh@94 -- # local anon 00:04:10.550 11:07:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.550 11:07:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.550 11:07:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.550 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:10.550 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:10.550 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.550 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.550 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.550 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.550 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.550 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 
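The per_node_1G_alloc setup above converts the 1048576 kB request into nr_hugepages=512 at the 2048 kB default page size (512 x 2048 kB = 1 GiB) and hands it to scripts/setup.sh as NRHUGE=512 HUGENODE=0, confining the reservation to NUMA node 0; verify_nr_hugepages then re-reads the counters with the same get_meminfo loop as in default_setup. A hedged sketch of the node-local reservation step, using the standard kernel sysfs layout rather than the actual setup.sh internals (which, as the uio_pci_generic lines show, also handle device binding):

  # Reserve 512 x 2048 kB hugepages (1 GiB) on NUMA node 0 only, then check.
  NRHUGE=512
  HUGENODE=0
  echo "$NRHUGE" > "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"
  cat "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"   # expect 512
  grep -E 'HugePages_Total|Hugetlb' /proc/meminfo                                          # expect 512 pages / 1048576 kB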
00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141412 kB' 'MemAvailable: 10517672 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 1254360 kB' 'Active(anon): 128724 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155336 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93308 kB' 'KernelStack: 6488 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 
11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.550 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.550 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.551 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:10.551 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:10.551 11:07:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.551 11:07:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.551 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.551 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:10.551 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:10.551 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.551 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.551 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.551 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.551 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.551 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141412 kB' 'MemAvailable: 10517672 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456040 kB' 'Inactive: 1254360 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119328 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 
kB' 'Slab: 155364 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93336 kB' 'KernelStack: 6504 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 
11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.551 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.551 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.552 11:07:51 -- setup/common.sh@33 -- # echo 0 00:04:10.552 11:07:51 -- setup/common.sh@33 -- # return 0 00:04:10.552 11:07:51 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.552 11:07:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.552 11:07:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.552 11:07:51 -- setup/common.sh@18 -- # local node= 00:04:10.552 11:07:51 -- setup/common.sh@19 -- # local var val 00:04:10.552 11:07:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.552 11:07:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.552 11:07:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.552 11:07:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.552 11:07:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.552 11:07:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141412 kB' 'MemAvailable: 10517672 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456236 kB' 'Inactive: 1254360 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119520 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155360 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93332 kB' 'KernelStack: 6488 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.552 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.552 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- 
# continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:51 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.553 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.553 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.554 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:10.554 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:10.554 11:07:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.554 nr_hugepages=512 00:04:10.554 11:07:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:10.554 
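The trace above is setup/common.sh's get_meminfo helper at work: it loads /proc/meminfo (or a node's own meminfo file when a node id is passed), drops any "Node <n>" prefix, then walks the keys one by one until it reaches the requested field and echoes its value, which is why every meminfo key scrolls past once per lookup in this log. Below is a minimal, self-contained sketch of that lookup under the same assumptions (stock "Key: value kB" meminfo format); the name get_meminfo_value is illustrative, not the script's exact function.

# Sketch: return the value column for one meminfo key, system-wide or per node.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read that node's meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#Node * }               # per-node lines start with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. 512 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}

For example, get_meminfo_value HugePages_Rsvd matches the resv=0 result traced just above, and passing a node id (as the later HugePages_Surp 0 call does) switches the read to that node's meminfo file.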
resv_hugepages=0 00:04:10.554 11:07:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.554 surplus_hugepages=0 00:04:10.554 11:07:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.554 anon_hugepages=0 00:04:10.554 11:07:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.554 11:07:52 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:10.554 11:07:52 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:10.554 11:07:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.554 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.554 11:07:52 -- setup/common.sh@18 -- # local node= 00:04:10.554 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:10.554 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.554 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.554 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.554 11:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.554 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.554 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141412 kB' 'MemAvailable: 10517672 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456436 kB' 'Inactive: 1254360 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155384 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93356 kB' 'KernelStack: 6528 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 
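The key-by-key matching that resumes below is the same helper fetching HugePages_Total for the final assertions: hugepages.sh then checks that the 512 expected pages are covered by the configured count plus any surplus and reserved pages, and that node0 holds the full allocation ("node0=512 expecting 512" further down). A hedged sketch of that bookkeeping follows, made self-contained with awk rather than the script's own get_meminfo; the name verify_hugepages is illustrative.

# Sketch: the accounting check the trace performs once the counters are read.
verify_hugepages() {
    local expected=$1                                            # 512 in this run
    local nr surp resv node0
    nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)     # 512
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0
    # Every expected page must be accounted for by the configured total
    # plus whatever the kernel reports as surplus or reserved.
    (( expected == nr + surp + resv )) || return 1
    # Per-node view: this run expects node0 to hold all of the pages.
    node0=$(awk '/HugePages_Total:/ {print $NF}' \
        /sys/devices/system/node/node0/meminfo)
    echo "node0=$node0 expecting $expected"
    (( node0 == expected ))
}

Called here as verify_hugepages 512; the even_2G_alloc test that starts near the end of this excerpt reruns the same kind of check with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes.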
00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.554 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.554 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 
11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.555 11:07:52 -- setup/common.sh@33 -- # echo 512 00:04:10.555 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:10.555 11:07:52 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:10.555 11:07:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.555 11:07:52 -- setup/hugepages.sh@27 -- # local node 00:04:10.555 11:07:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.555 11:07:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.555 11:07:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.555 11:07:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.555 11:07:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.555 11:07:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.555 11:07:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.555 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.555 11:07:52 -- setup/common.sh@18 -- # local node=0 00:04:10.555 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:10.555 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.555 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.555 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.555 11:07:52 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.555 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.555 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141412 kB' 'MemUsed: 3097700 kB' 'SwapCached: 0 kB' 'Active: 456092 kB' 'Inactive: 1254360 kB' 'Active(anon): 128260 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1592676 kB' 'Mapped: 50708 kB' 'AnonPages: 119392 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155372 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.555 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.555 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- 
setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # continue 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.556 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.556 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.556 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:10.556 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:10.556 11:07:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.556 11:07:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.556 11:07:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.556 node0=512 expecting 512 00:04:10.556 11:07:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.556 11:07:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.556 00:04:10.556 real 0m0.538s 00:04:10.556 user 0m0.288s 00:04:10.556 sys 0m0.283s 00:04:10.556 11:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.556 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:04:10.556 ************************************ 00:04:10.556 END TEST per_node_1G_alloc 00:04:10.556 ************************************ 00:04:10.556 11:07:52 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:10.556 11:07:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.556 11:07:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.556 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:04:10.556 ************************************ 00:04:10.556 START TEST even_2G_alloc 00:04:10.556 ************************************ 00:04:10.556 11:07:52 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:10.556 11:07:52 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:10.556 11:07:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.556 11:07:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.556 11:07:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.556 11:07:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.556 11:07:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.556 11:07:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.556 11:07:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.556 11:07:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.556 11:07:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.556 11:07:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:10.556 11:07:52 -- setup/hugepages.sh@83 -- # : 0 00:04:10.556 11:07:52 -- 
setup/hugepages.sh@84 -- # : 0 00:04:10.556 11:07:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.556 11:07:52 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:10.556 11:07:52 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:10.556 11:07:52 -- setup/hugepages.sh@153 -- # setup output 00:04:10.556 11:07:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.556 11:07:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.129 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.129 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.129 11:07:52 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:11.129 11:07:52 -- setup/hugepages.sh@89 -- # local node 00:04:11.129 11:07:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.129 11:07:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.129 11:07:52 -- setup/hugepages.sh@92 -- # local surp 00:04:11.129 11:07:52 -- setup/hugepages.sh@93 -- # local resv 00:04:11.129 11:07:52 -- setup/hugepages.sh@94 -- # local anon 00:04:11.129 11:07:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.129 11:07:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.129 11:07:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.129 11:07:52 -- setup/common.sh@18 -- # local node= 00:04:11.129 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:11.129 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.129 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.129 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.129 11:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.129 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.129 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8088304 kB' 'MemAvailable: 9464564 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456836 kB' 'Inactive: 1254360 kB' 'Active(anon): 129004 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119888 kB' 'Mapped: 51096 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155380 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6580 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 
11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # 
continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:11.130 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:11.130 11:07:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.130 11:07:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.130 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.130 11:07:52 -- setup/common.sh@18 -- # local node= 00:04:11.130 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:11.130 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.130 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.130 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.130 11:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.130 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.130 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091720 kB' 'MemAvailable: 9467980 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456160 kB' 'Inactive: 1254360 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155348 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93320 kB' 'KernelStack: 6512 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # 
continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.130 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- 
# continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:11.131 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:11.131 11:07:52 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.131 11:07:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.131 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.131 11:07:52 -- setup/common.sh@18 -- # local node= 00:04:11.131 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:11.131 11:07:52 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:11.131 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.131 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.131 11:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.131 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.131 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091724 kB' 'MemAvailable: 9467984 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456268 kB' 'Inactive: 1254360 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155332 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93304 kB' 'KernelStack: 6512 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.131 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.131 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 
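The long runs of "[[ <field> == \H\u\g\e... ]]" / "continue" entries above are the xtrace of setup/common.sh's get_meminfo helper scanning a snapshot of /proc/meminfo one field at a time until it reaches the key it was asked for (AnonHugePages, then HugePages_Surp, here HugePages_Rsvd). A minimal paraphrase of the loop being traced, assuming only what the trace itself shows and not the verbatim SPDK source, looks like this:

  get_meminfo() {                      # paraphrased sketch, not the verbatim SPDK helper
    local get=$1                       # field to look up, e.g. HugePages_Rsvd
    local var val _
    local -a mem
    mapfile -t mem < /proc/meminfo     # snapshot the file once
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue # every non-matching field is one "continue" in the trace
      echo "$val"                      # value of the requested field (0 for HugePages_Rsvd here)
      return 0
    done < <(printf '%s\n' "${mem[@]}")
  }

Each meminfo field that does not match accounts for one "continue" entry, which is why every lookup produces a near-identical wall of trace output; the actual values only appear once per pass, in the long quoted printf argument list near the top.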
00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- 
setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:11.132 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:11.132 11:07:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.132 nr_hugepages=1024 00:04:11.132 11:07:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.132 resv_hugepages=0 00:04:11.133 11:07:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.133 surplus_hugepages=0 00:04:11.133 11:07:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.133 anon_hugepages=0 00:04:11.133 11:07:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.133 11:07:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.133 11:07:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.133 11:07:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.133 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.133 11:07:52 -- setup/common.sh@18 -- # local node= 00:04:11.133 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:11.133 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.133 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.133 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.133 11:07:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.133 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.133 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091724 kB' 'MemAvailable: 9467984 kB' 
'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 455968 kB' 'Inactive: 1254360 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119292 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155332 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93304 kB' 'KernelStack: 6512 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- 
setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 
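A side note on how these comparisons are rendered: the right-hand side appears with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and so on). That escaping is not in the script; it is how bash -x prints a quoted word used inside [[ == ]], signalling a literal string match rather than a glob. A hypothetical two-liner (names are illustrative, not from the log) showing the difference:

  field=MemTotal get=HugePages_Total
  [[ $field == "$get" ]]   # quoted RHS: literal compare; xtrace renders it as \H\u\g\e\P\a\g\e\s...
  [[ $field == $get ]]     # unquoted RHS: treated as a glob pattern instead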
00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 11:07:52 -- setup/common.sh@33 -- # echo 1024 00:04:11.134 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:11.134 11:07:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.134 11:07:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.134 11:07:52 -- setup/hugepages.sh@27 -- # local node 00:04:11.134 11:07:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.134 11:07:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.134 11:07:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.134 11:07:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.134 11:07:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.134 11:07:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.134 11:07:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.134 11:07:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.134 11:07:52 -- setup/common.sh@18 -- # local node=0 00:04:11.134 11:07:52 -- setup/common.sh@19 -- # local var val 00:04:11.134 11:07:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.134 11:07:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.134 11:07:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.134 11:07:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.134 11:07:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.134 11:07:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091984 kB' 'MemUsed: 4147128 kB' 'SwapCached: 0 kB' 'Active: 456228 kB' 'Inactive: 1254360 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1592676 kB' 'Mapped: 50708 kB' 'AnonPages: 119552 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155332 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.134 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 
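From here the same lookup runs against node 0 instead of the whole system: given a node argument, the helper swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix those lines carry, which is why this pass lists MemUsed and FilePages and no MemAvailable. The node-handling step, sketched from the trace (shopt -s extglob is added here only so the snippet runs standalone):

  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node file when a node was requested
  shopt -s extglob                                      # the +([0-9]) pattern below needs extglob
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                      # "Node 0 MemTotal: ..." -> "MemTotal: ..."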
00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- 
setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # continue 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 11:07:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 11:07:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 11:07:52 -- setup/common.sh@33 -- # echo 0 00:04:11.135 11:07:52 -- setup/common.sh@33 -- # return 0 00:04:11.135 11:07:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.135 11:07:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.135 11:07:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.135 11:07:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.135 node0=1024 expecting 1024 00:04:11.135 11:07:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.135 11:07:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.135 00:04:11.135 real 0m0.537s 00:04:11.135 user 0m0.263s 00:04:11.135 sys 0m0.305s 
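The long runs of read/continue records above are the trace of a single helper: get_meminfo scans /proc/meminfo (or a per-node meminfo file) line by line until it finds the requested key and echoes its value, and the hugepages checks then compare those values against the expected counts. The following is a minimal bash sketch reconstructed from the traced commands; the helper name and overall flow come from the trace, but the exact setup/common.sh source in the SPDK repo may differ.

    shopt -s extglob   # needed for the +([0-9]) pattern used when stripping per-node prefixes

    # Sketch of the meminfo lookup traced above (assumed layout, not the verbatim script).
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node file is preferred when a node id is passed in.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # key not found; not reached in the traces above
    }

The callers above consume it exactly this way, e.g. surp=$(get_meminfo HugePages_Surp) yields 0 in this run, and the gathered per-node totals are then compared against the expected page count ('node0=1024 expecting 1024' for even_2G_alloc, 1025 for the odd_alloc test that follows).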
00:04:11.135 11:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.135 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:04:11.135 ************************************ 00:04:11.135 END TEST even_2G_alloc 00:04:11.135 ************************************ 00:04:11.135 11:07:52 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:11.135 11:07:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.135 11:07:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.135 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:04:11.135 ************************************ 00:04:11.135 START TEST odd_alloc 00:04:11.135 ************************************ 00:04:11.135 11:07:52 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:11.135 11:07:52 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:11.135 11:07:52 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:11.135 11:07:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.135 11:07:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.135 11:07:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:11.135 11:07:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.135 11:07:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.135 11:07:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.135 11:07:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:11.135 11:07:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.394 11:07:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.394 11:07:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.394 11:07:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.394 11:07:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.394 11:07:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.394 11:07:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:11.394 11:07:52 -- setup/hugepages.sh@83 -- # : 0 00:04:11.394 11:07:52 -- setup/hugepages.sh@84 -- # : 0 00:04:11.394 11:07:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.394 11:07:52 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:11.394 11:07:52 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:11.394 11:07:52 -- setup/hugepages.sh@160 -- # setup output 00:04:11.394 11:07:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.394 11:07:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.657 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.657 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.657 11:07:53 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:11.657 11:07:53 -- setup/hugepages.sh@89 -- # local node 00:04:11.657 11:07:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.657 11:07:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.657 11:07:53 -- setup/hugepages.sh@92 -- # local surp 00:04:11.657 11:07:53 -- setup/hugepages.sh@93 -- # local resv 00:04:11.657 11:07:53 -- setup/hugepages.sh@94 -- # local anon 00:04:11.657 11:07:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.657 11:07:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.657 11:07:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.657 11:07:53 -- setup/common.sh@18 -- # local node= 
00:04:11.657 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:11.657 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.657 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.657 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.657 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.657 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.657 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8090704 kB' 'MemAvailable: 9466964 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456576 kB' 'Inactive: 1254360 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155336 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93308 kB' 'KernelStack: 6536 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 
11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 
00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.657 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.657 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.658 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:11.658 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:11.658 11:07:53 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.658 11:07:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.658 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.658 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:11.658 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:11.658 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.658 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.658 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.658 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.658 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.658 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12239112 kB' 'MemFree: 8090704 kB' 'MemAvailable: 9466964 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456268 kB' 'Inactive: 1254360 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155340 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93312 kB' 'KernelStack: 6528 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 
11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.658 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.658 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 
11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.659 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:11.659 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:11.659 11:07:53 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.659 11:07:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.659 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.659 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:11.659 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:11.659 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.659 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.659 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.659 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.659 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.659 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8090704 kB' 'MemAvailable: 9466964 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456332 kB' 'Inactive: 1254360 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155340 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93312 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.659 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.659 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 
00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 
-- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.660 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.660 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.661 
11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.661 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:11.661 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:11.661 11:07:53 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.661 11:07:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:11.661 nr_hugepages=1025 00:04:11.661 resv_hugepages=0 00:04:11.661 11:07:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.661 surplus_hugepages=0 00:04:11.661 11:07:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.661 anon_hugepages=0 00:04:11.661 11:07:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.661 11:07:53 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.661 11:07:53 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:11.661 11:07:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.661 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.661 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:11.661 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:11.661 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.661 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.661 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.661 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.661 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.661 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8090704 kB' 'MemAvailable: 9466964 kB' 'Buffers: 2684 kB' 'Cached: 1589992 kB' 'SwapCached: 0 kB' 'Active: 456224 kB' 'Inactive: 1254360 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119504 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155336 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93308 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.661 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.661 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 
11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.662 11:07:53 -- setup/common.sh@33 -- # echo 1025 00:04:11.662 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:11.662 11:07:53 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.662 11:07:53 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.662 11:07:53 -- setup/hugepages.sh@27 -- # local node 00:04:11.662 11:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.662 11:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:11.662 11:07:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.662 11:07:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.662 11:07:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.662 11:07:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.662 11:07:53 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.662 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.662 11:07:53 -- setup/common.sh@18 -- # local node=0 00:04:11.662 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:11.662 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.662 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.662 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.662 11:07:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.662 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.662 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8091068 kB' 'MemUsed: 4148044 kB' 'SwapCached: 0 kB' 'Active: 456224 kB' 'Inactive: 1254360 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1592676 kB' 'Mapped: 50708 kB' 'AnonPages: 119552 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155336 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 
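The @107/@110 checks a little above are the accounting step of the verification: the HugePages_Total read from meminfo has to equal the requested nr_hugepages plus reserved and surplus pages (1025 == 1025 + 0 + 0 in this run) before the per-node loop repeats the same reads against /sys/devices/system/node/node0/meminfo. A hedged, awk-based sketch of that check (the traced script reads the same fields through its own get_meminfo helper; names here are illustrative):

# Sketch of the accounting check, done with awk so it is self-contained.
check_hugepage_accounting() {
    local requested=$1
    local total resv surp
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    # The pool is consistent when total == requested + surplus + reserved.
    (( total == requested + surp + resv ))
}
# In this run: check_hugepage_accounting 1025   -> true (1025 == 1025 + 0 + 0)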
00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.662 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.662 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.663 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.663 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.663 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.922 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.922 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 
11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # continue 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.923 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:11.923 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:11.923 11:07:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.923 11:07:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.923 11:07:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.923 node0=1025 expecting 1025 00:04:11.923 11:07:53 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:11.923 11:07:53 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:11.923 00:04:11.923 real 0m0.544s 00:04:11.923 user 0m0.277s 00:04:11.923 sys 0m0.300s 00:04:11.923 11:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.923 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:11.923 ************************************ 00:04:11.923 END TEST odd_alloc 00:04:11.923 ************************************ 00:04:11.923 11:07:53 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:11.923 11:07:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.923 11:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.923 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:11.923 ************************************ 00:04:11.923 START TEST custom_alloc 00:04:11.923 ************************************ 00:04:11.923 11:07:53 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:11.923 11:07:53 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:11.923 11:07:53 -- setup/hugepages.sh@169 -- # local node 00:04:11.923 11:07:53 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:11.923 11:07:53 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:11.923 11:07:53 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:11.923 11:07:53 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:11.923 11:07:53 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.923 11:07:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
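custom_alloc starts by converting the requested size 1048576 into a target of 512 hugepages; with the 2048 kB hugepage size reported elsewhere in this trace, that is simply size / hugepagesize (1048576 / 2048 = 512, i.e. a 1 GiB pool of 2 MiB pages, assuming the size argument is in kB). A small sketch of that arithmetic with an illustrative helper name:

# Sketch of the size -> page-count arithmetic; treating the argument as kB is
# an assumption based on the Hugepagesize/Hugetlb figures in this trace.
pages_for_size() {
    local size_kb=$1
    local hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 here
    echo $(( size_kb / hp_kb ))
}
# pages_for_size 1048576    -> 512 on a 2048 kB hugepage system, matching the
# later 'Hugetlb: 1048576 kB' line once those 512 pages are allocated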
00:04:11.923 11:07:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.923 11:07:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.923 11:07:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.923 11:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.923 11:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.923 11:07:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.923 11:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.923 11:07:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:11.923 11:07:53 -- setup/hugepages.sh@83 -- # : 0 00:04:11.923 11:07:53 -- setup/hugepages.sh@84 -- # : 0 00:04:11.923 11:07:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:11.923 11:07:53 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.923 11:07:53 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.923 11:07:53 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:11.923 11:07:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.923 11:07:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.923 11:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.923 11:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.923 11:07:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.923 11:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.923 11:07:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:11.923 11:07:53 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.923 11:07:53 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.923 11:07:53 -- setup/hugepages.sh@78 -- # return 0 00:04:11.923 11:07:53 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:11.923 11:07:53 -- setup/hugepages.sh@187 -- # setup output 00:04:11.923 11:07:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.923 11:07:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.184 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.184 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.184 11:07:53 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:12.184 11:07:53 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:12.184 11:07:53 -- setup/hugepages.sh@89 -- # local node 00:04:12.184 11:07:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.184 11:07:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.184 11:07:53 -- setup/hugepages.sh@92 -- # local surp 00:04:12.184 11:07:53 -- setup/hugepages.sh@93 -- # local resv 00:04:12.184 11:07:53 -- setup/hugepages.sh@94 -- # local anon 00:04:12.184 11:07:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.184 11:07:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.184 
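The stretch above distributes the 512 pages across nodes, ends up with HUGENODE='nodes_hp[0]=512', and re-runs scripts/setup.sh so the pages are requested on node 0 before verify_nr_hugepages re-reads meminfo. For illustration only, the same per-node request can be made through the standard sysfs knob; the traced run delegates this to scripts/setup.sh rather than writing sysfs directly:

# Illustrative per-node hugepage request via the standard sysfs knob (needs
# root); the traced run goes through scripts/setup.sh with HUGENODE set.
request_node_hugepages() {
    local node=$1 pages=$2
    local knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$pages" > "$knob"
    cat "$knob"    # read back what the kernel actually granted
}
# request_node_hugepages 0 512    # ask for 512 x 2 MiB pages on node 0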
11:07:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.184 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:12.184 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:12.184 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.184 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.184 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.184 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.184 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.184 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141288 kB' 'MemAvailable: 10517552 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456648 kB' 'Inactive: 1254364 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119996 kB' 'Mapped: 51008 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155372 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93344 kB' 'KernelStack: 6552 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.184 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.184 
11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.184 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.185 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:12.185 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:12.185 11:07:53 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.185 11:07:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.185 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.185 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:12.185 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:12.185 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.185 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.185 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.185 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.185 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.185 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
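The AnonHugePages lookup that just finished (anon=0) is gated by the earlier @96 test: only when /sys/kernel/mm/transparent_hugepage/enabled is not pinned to [never] does the verification sample AnonHugePages, so transparent hugepages are counted separately from the explicit pool. A short sketch of reading that mode from the standard sysfs path (helper name illustrative):

# Print the active THP mode, i.e. the bracketed word in
# /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never").
thp_mode() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' /sys/kernel/mm/transparent_hugepage/enabled
}
# Only sample AnonHugePages when THP is not disabled outright:
# [[ $(thp_mode) != never ]] && awk '/^AnonHugePages:/ {print $2}' /proc/meminfo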
00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.185 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141288 kB' 'MemAvailable: 10517552 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456020 kB' 'Inactive: 1254364 kB' 'Active(anon): 128188 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155388 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93360 kB' 'KernelStack: 6528 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.185 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.185 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 
00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.186 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.186 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 
11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.187 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:12.187 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:12.187 11:07:53 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.187 11:07:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.187 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.187 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:12.187 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:12.187 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.187 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.187 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.187 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.187 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.187 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141288 kB' 'MemAvailable: 10517552 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456012 kB' 'Inactive: 1254364 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119340 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155384 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93356 kB' 'KernelStack: 6528 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 
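The long run of "continue" entries above is setup/common.sh's get_meminfo walking the /proc/meminfo dump one "key: value" pair at a time and skipping every key that is not the one requested (here HugePages_Rsvd). A minimal sketch of that loop, reconstructed from the trace rather than taken from the SPDK source (the name get_meminfo_sketch and the direct read of /proc/meminfo are assumptions):
  # Minimal sketch, reconstructed from the trace above; not the verbatim SPDK helper.
  get_meminfo_sketch() {
      shopt -s extglob
      local get=$1 line var val _
      local -a mem
      mapfile -t mem < /proc/meminfo            # same mapfile step seen at setup/common.sh@28
      mem=("${mem[@]#Node +([0-9]) }")          # strip "Node N " prefixes, as at setup/common.sh@29
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue      # every skipped key shows up as one "continue" entry in the log
          echo "$val"                           # value handed back via "echo" / "return 0", as at @33
          return 0
      done
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Rsvd  ->  0 on this runner, per the dump above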
00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.187 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.187 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.188 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.188 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
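The fully backslash-escaped right-hand sides (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and so on) are not corruption in the log: when the match operand of [[ ... == ... ]] is quoted, bash's xtrace re-prints it with every character escaped to show that it is compared literally rather than as a glob pattern. A quick reproduction in a plain shell prints roughly the following (the default "+ " prefix differs from the PS4 these scripts use):
  $ get=HugePages_Rsvd
  $ set -x
  $ [[ MemTotal == "$get" ]]
  + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]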
00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.448 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.448 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 
-- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.449 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:12.449 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:12.449 11:07:53 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.449 nr_hugepages=512 00:04:12.449 11:07:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:12.449 resv_hugepages=0 00:04:12.449 11:07:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.449 surplus_hugepages=0 00:04:12.449 11:07:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.449 anon_hugepages=0 00:04:12.449 11:07:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.449 11:07:53 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.449 11:07:53 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:12.449 11:07:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.449 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.449 11:07:53 -- setup/common.sh@18 -- # local node= 00:04:12.449 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:12.449 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.449 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.449 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.449 11:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.449 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.449 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141288 kB' 'MemAvailable: 10517552 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456256 kB' 'Inactive: 1254364 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155380 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6528 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 
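At this point both lookups have come back zero (surp=0 at hugepages.sh@99, resv=0 at @100) and the script echoes the totals it is about to verify: nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0. The two arithmetic checks at @107 and @109 then assert that the 512 pages the test requested are fully accounted for. Restated as a stand-alone snippet (variable names match the trace; the echo messages are illustrative only):
  nr_hugepages=512 surp=0 resv=0
  (( 512 == nr_hugepages + surp + resv )) && echo "requested pages == allocated + surplus + reserved"
  (( 512 == nr_hugepages ))               && echo "allocated count matches the request exactly"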
00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.449 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
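The scan above is the HugePages_Total lookup against the same dump printed a little earlier, and the hugepage figures in that dump are internally consistent: 512 pages of Hugepagesize 2048 kB account exactly for the "Hugetlb: 1048576 kB" line, i.e. the 1 GiB worth of pages this custom_alloc case allocated. The cross-check, spelled out (numbers copied from the dump):
  # 512 hugepages x 2048 kB each = 1048576 kB reported under "Hugetlb:"
  echo $(( 512 * 2048 ))   # -> 1048576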
00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.450 11:07:53 -- setup/common.sh@33 -- # echo 512 00:04:12.450 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:12.450 11:07:53 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.450 11:07:53 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.450 11:07:53 -- setup/hugepages.sh@27 -- # local node 00:04:12.450 11:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.450 11:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.450 11:07:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.450 11:07:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.450 11:07:53 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:12.450 11:07:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.450 11:07:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.450 11:07:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.450 11:07:53 -- setup/common.sh@18 -- # local node=0 00:04:12.450 11:07:53 -- setup/common.sh@19 -- # local var val 00:04:12.450 11:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.450 11:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.450 11:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.450 11:07:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.450 11:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.450 11:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9141288 kB' 'MemUsed: 3097824 kB' 'SwapCached: 0 kB' 'Active: 456220 kB' 'Inactive: 1254364 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1592680 kB' 'Mapped: 50708 kB' 'AnonPages: 119504 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155372 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 
11:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 
-- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # continue 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 11:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 11:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.451 11:07:53 -- setup/common.sh@33 -- # echo 0 00:04:12.451 11:07:53 -- setup/common.sh@33 -- # return 0 00:04:12.451 11:07:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.451 11:07:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.451 11:07:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.451 11:07:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.451 node0=512 expecting 512 00:04:12.451 11:07:53 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.451 11:07:53 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.451 00:04:12.451 real 0m0.535s 00:04:12.451 user 0m0.265s 00:04:12.451 sys 0m0.305s 00:04:12.451 11:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.451 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:12.451 ************************************ 00:04:12.451 END TEST custom_alloc 00:04:12.451 ************************************ 00:04:12.451 11:07:53 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:12.451 11:07:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.451 11:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.451 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:12.451 ************************************ 00:04:12.451 START TEST no_shrink_alloc 00:04:12.451 ************************************ 00:04:12.451 11:07:53 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:12.451 11:07:53 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:12.451 11:07:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:12.451 11:07:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:12.451 11:07:53 -- setup/hugepages.sh@51 -- # shift 00:04:12.451 11:07:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:12.451 11:07:53 -- setup/hugepages.sh@52 -- # local node_ids 00:04:12.451 11:07:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.451 11:07:53 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:12.451 11:07:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:12.451 11:07:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:12.451 11:07:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.451 11:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.451 11:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.451 11:07:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.451 11:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.451 11:07:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:12.451 11:07:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:12.451 11:07:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:12.451 11:07:53 -- setup/hugepages.sh@73 -- # return 0 00:04:12.451 11:07:53 -- setup/hugepages.sh@198 -- # setup output 00:04:12.451 11:07:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.451 11:07:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.711 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.711 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.711 11:07:54 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:12.711 11:07:54 -- setup/hugepages.sh@89 -- # local node 00:04:12.711 11:07:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.711 11:07:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.711 11:07:54 -- setup/hugepages.sh@92 -- # local surp 00:04:12.711 11:07:54 -- setup/hugepages.sh@93 -- # local resv 00:04:12.711 11:07:54 -- setup/hugepages.sh@94 -- # local anon 00:04:12.711 11:07:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.711 11:07:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.711 11:07:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.711 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:12.711 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:12.711 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.711 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.711 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.711 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.711 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.711 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8092460 kB' 'MemAvailable: 9468724 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456360 kB' 'Inactive: 1254364 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 50812 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155400 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93372 kB' 'KernelStack: 6504 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.711 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.711 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 
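The custom_alloc case has finished (END TEST banner above) and no_shrink_alloc has started with a full-size request: the dump now shows "HugePages_Total: 1024" and "Hugetlb: 2097152 kB" (1024 x 2048 kB). Before counting anonymous huge pages, verify_nr_hugepages gates on the transparent-hugepage mode at hugepages.sh@96; the bracketed string "always [madvise] never" being compared matches the usual format of /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only of interest when the active mode is not "[never]". A small sketch of that gate (the awk fallback is an illustration, not the SPDK code):
  thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp_state != *"[never]"* ]]; then         # same "!= *\[\n\e\v\e\r\]*" test as in the trace
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "AnonHugePages: ${anon_kb:-0} kB"       # 0 kB on this runner, per the dump above
  fi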
00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 
11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # 
continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.712 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.712 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.712 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:12.712 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:12.712 11:07:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.712 11:07:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.712 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.712 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:12.712 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:12.712 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.712 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.712 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.712 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.976 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.976 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8092460 kB' 'MemAvailable: 9468724 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456292 kB' 'Inactive: 1254364 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155428 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93400 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # 
continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.976 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.976 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 
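The long run of `continue` entries above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp at this point in the log). A minimal sketch of that helper, reconstructed from the traced line numbers and commands; the real function in SPDK's test scripts may differ in detail:

```bash
#!/usr/bin/env bash
shopt -s extglob
# Reconstruction of the lookup traced at setup/common.sh@17-33; a sketch,
# not the exact SPDK source.
get_meminfo() {
    local get=$1 node=${2:-}        # field to look up, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo mem

    # With a node argument, the node-local meminfo file is read instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested field -> next line
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
}

# The caller captures the value, e.g.:
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
```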
00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.977 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:12.977 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:12.977 11:07:54 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.977 11:07:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.977 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.977 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:12.977 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:12.977 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.977 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.977 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.977 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.977 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.977 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8092460 kB' 'MemAvailable: 9468724 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456304 kB' 'Inactive: 1254364 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155428 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93400 kB' 'KernelStack: 6496 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.977 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.977 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 
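The backslash-riddled patterns in these checks (e.g. `\H\u\g\e\P\a\g\e\s\_\R\s\v\d`) are not part of the script source: bash's xtrace escapes a quoted right-hand side of `[[ == ]]` when it prints the command, signalling a literal comparison rather than a glob. A standalone illustration of the effect (the variable name here is only for the example):

```bash
#!/usr/bin/env bash
set -x
get=HugePages_Rsvd
# Under xtrace this comparison is logged as:
#   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[[ MemTotal == "$get" ]] || echo "no match, keep scanning"
```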
00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- 
setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.978 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.978 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.979 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:12.979 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:12.979 11:07:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.979 nr_hugepages=1024 00:04:12.979 11:07:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.979 resv_hugepages=0 00:04:12.979 11:07:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.979 surplus_hugepages=0 00:04:12.979 11:07:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.979 anon_hugepages=0 00:04:12.979 11:07:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.979 11:07:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.979 11:07:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.979 11:07:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.979 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.979 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:12.979 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:12.979 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.979 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
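With anon, surp and resv all read back as 0, the consistency check traced at setup/hugepages.sh@107 reduces to 1024 == 1024 + 0 + 0, so the script goes on to re-read HugePages_Total. An illustrative sketch of that accounting step with this run's values filled in; it assumes the get_meminfo sketch above, and the variable names mirror the trace rather than the real script:

```bash
#!/usr/bin/env bash
# Accounting traced at setup/hugepages.sh@97-110, values from this run.
nr_hugepages=1024                      # pool size the test expects

anon=$(get_meminfo AnonHugePages)      # 0
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
total=$(get_meminfo HugePages_Total)   # 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# 1024 == 1024 + 0 + 0: the pool is exactly the requested size, with no
# surplus or reserved pages unaccounted for.
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
```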
00:04:12.979 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.979 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.979 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.979 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8092460 kB' 'MemAvailable: 9468724 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 1254364 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119536 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155424 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93396 kB' 'KernelStack: 6480 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- 
setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.979 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.979 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 
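Each pair of `IFS=': '` / `read -r var val _` entries in this stretch is one iteration splitting a single meminfo line from the dump above. A standalone illustration of how two of those lines split (not part of the test scripts):

```bash
#!/usr/bin/env bash
# Splitting behaviour of the traced `IFS=': ' read -r var val _`.
IFS=': ' read -r var val _ <<< 'HugePages_Total:    1024'
echo "var=$var val=$val"    # -> var=HugePages_Total val=1024

# A line with a unit keeps only the number in val; the unit lands in `_`.
IFS=': ' read -r var val _ <<< 'MemFree:  8092460 kB'
echo "var=$var val=$val"    # -> var=MemFree val=8092460
```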
00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.980 11:07:54 -- setup/common.sh@33 -- # echo 1024 00:04:12.980 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:12.980 11:07:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.980 11:07:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.980 11:07:54 -- setup/hugepages.sh@27 -- # local node 00:04:12.980 11:07:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.980 11:07:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.980 11:07:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.980 11:07:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.980 11:07:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.980 11:07:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.980 11:07:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.980 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.980 11:07:54 -- setup/common.sh@18 -- # local node=0 00:04:12.980 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:12.980 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.980 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.980 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.980 11:07:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.980 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.980 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8092460 kB' 'MemUsed: 4146652 kB' 'SwapCached: 0 kB' 'Active: 456284 kB' 'Inactive: 1254364 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1592680 kB' 'Mapped: 50708 kB' 'AnonPages: 119548 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155424 
kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.980 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.980 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 
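After the system-wide totals check out, the trace switches to per-node accounting: get_nodes enumerates /sys/devices/system/node/node*, records the expected page count per node, and get_meminfo is called again with an explicit node argument so it reads the node-local meminfo file, whose lines carry a "Node 0" prefix and a slightly different field set (MemUsed, FilePages instead of the swap and commit fields). A sketch of that pass, reconstructed from the traced lines; it assumes the get_meminfo sketch shown earlier, and nodes_test is populated elsewhere in the real script, so it is seeded here only so the loop runs:

```bash
#!/usr/bin/env bash
shopt -s extglob                        # needed for the +([0-9]) glob below
# Per-node pass traced at setup/hugepages.sh@112-117 and setup/common.sh@17-29.
nodes_test=([0]=1024)                   # seeded for the sketch only
nodes_sys=()
resv=0                                  # HugePages_Rsvd value read earlier

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024  # pages expected on each node
    done
    local no_nodes=${#nodes_sys[@]}     # 1 on this single-node VM
    (( no_nodes > 0 ))
}

get_nodes
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Passing the node number makes get_meminfo read
    # /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"  # node0=1024 expecting 1024
```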
00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- 
setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # continue 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.981 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.981 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.981 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:12.981 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:12.981 11:07:54 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:12.981 11:07:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.981 11:07:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.981 11:07:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.981 node0=1024 expecting 1024 00:04:12.981 11:07:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.981 11:07:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.981 11:07:54 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:12.981 11:07:54 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:12.981 11:07:54 -- setup/hugepages.sh@202 -- # setup output 00:04:12.981 11:07:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.981 11:07:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.279 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.279 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.279 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:13.279 11:07:54 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:13.279 11:07:54 -- setup/hugepages.sh@89 -- # local node 00:04:13.279 11:07:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.279 11:07:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.279 11:07:54 -- setup/hugepages.sh@92 -- # local surp 00:04:13.279 11:07:54 -- setup/hugepages.sh@93 -- # local resv 00:04:13.279 11:07:54 -- setup/hugepages.sh@94 -- # local anon 00:04:13.279 11:07:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.279 11:07:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.279 11:07:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.279 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:13.279 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:13.279 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.279 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.279 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.279 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.279 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.279 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097120 kB' 'MemAvailable: 9473384 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456704 kB' 'Inactive: 1254364 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120036 kB' 'Mapped: 50888 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155412 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93384 kB' 'KernelStack: 6504 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- 
setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.279 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.279 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.280 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:13.280 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:13.280 11:07:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.280 11:07:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.280 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.280 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:13.280 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:13.280 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.280 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.280 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.280 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.280 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.280 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097820 kB' 'MemAvailable: 9474084 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456400 kB' 'Inactive: 1254364 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 50836 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155412 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93384 kB' 'KernelStack: 6456 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.280 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.280 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 
11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
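From here on the same scan repeats against full /proc/meminfo snapshots, once per queried field (AnonHugePages, HugePages_Surp, then HugePages_Rsvd and HugePages_Total further down); each snapshot is a fresh read, which is why MemFree, AnonPages and PageTables drift slightly between them. When reading a trace like this, only a handful of hugepage fields matter to the check, and they can be pulled in one go with a one-liner such as (illustrative only, not part of the test scripts):

grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo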
00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.281 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.281 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.550 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.550 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.551 11:07:54 -- 
setup/common.sh@33 -- # echo 0 00:04:13.551 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:13.551 11:07:54 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.551 11:07:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.551 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.551 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:13.551 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:13.551 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.551 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.551 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.551 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.551 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.551 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097820 kB' 'MemAvailable: 9474084 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456284 kB' 'Inactive: 1254364 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119536 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155428 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93400 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- 
setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 
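This third pass is collecting HugePages_Rsvd; together with the AnonHugePages and HugePages_Surp lookups already traced (both 0), it feeds the accounting that hugepages.sh asserts a little further down: the expected page count must equal the reported total once surplus and reserved pages are counted, and each counter is echoed into the log. A self-contained sketch of that check, with the counters read back from /proc/meminfo for illustration (the real script tracks nr_hugepages itself):

target=1024                                                          # hugepages expected on this VM
anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)             # 0 kB in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)            # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)            # 0
nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( target == nr_hugepages + surp + resv ))                           # 1024 == 1024 + 0 + 0
(( target == nr_hugepages ))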
00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.551 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.551 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.552 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:13.552 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:13.552 11:07:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.552 nr_hugepages=1024 00:04:13.552 11:07:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.552 resv_hugepages=0 00:04:13.552 11:07:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.552 surplus_hugepages=0 00:04:13.552 11:07:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.552 anon_hugepages=0 00:04:13.552 11:07:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.552 11:07:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.552 11:07:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.552 11:07:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.552 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.552 11:07:54 -- setup/common.sh@18 -- # local node= 00:04:13.552 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:13.552 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.552 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.552 11:07:54 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:13.552 11:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.552 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.552 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097820 kB' 'MemAvailable: 9474084 kB' 'Buffers: 2684 kB' 'Cached: 1589996 kB' 'SwapCached: 0 kB' 'Active: 456320 kB' 'Inactive: 1254364 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 155428 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93400 kB' 'KernelStack: 6512 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.552 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.552 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 
11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.553 11:07:54 -- setup/common.sh@33 -- # echo 1024 00:04:13.553 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:13.553 11:07:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.553 11:07:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.553 11:07:54 -- setup/hugepages.sh@27 -- # local node 00:04:13.553 11:07:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.553 11:07:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.553 11:07:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.553 11:07:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.553 11:07:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.553 11:07:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.553 11:07:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.553 11:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.553 11:07:54 -- setup/common.sh@18 -- # local node=0 00:04:13.553 11:07:54 -- setup/common.sh@19 -- # local var val 00:04:13.553 11:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.553 11:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.553 11:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.553 11:07:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.553 11:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.553 11:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8098392 kB' 'MemUsed: 4140720 kB' 'SwapCached: 0 kB' 'Active: 456364 kB' 'Inactive: 1254364 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1254364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1592680 kB' 'Mapped: 50708 kB' 'AnonPages: 119656 kB' 'Shmem: 10484 kB' 'KernelStack: 6528 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 155428 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 93400 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.553 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.553 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # continue 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.554 11:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.554 11:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.554 11:07:54 -- setup/common.sh@33 -- # echo 0 00:04:13.554 11:07:54 -- setup/common.sh@33 -- # return 0 00:04:13.554 11:07:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.554 11:07:54 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.554 11:07:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.554 11:07:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.554 node0=1024 expecting 1024 00:04:13.554 11:07:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.554 11:07:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.554 00:04:13.554 real 0m1.071s 00:04:13.554 user 0m0.569s 00:04:13.554 sys 0m0.567s 00:04:13.554 11:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.554 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:13.554 ************************************ 00:04:13.554 END TEST no_shrink_alloc 00:04:13.554 ************************************ 00:04:13.554 11:07:55 -- setup/hugepages.sh@217 -- # clear_hp 00:04:13.554 11:07:55 -- setup/hugepages.sh@37 -- # local node hp 00:04:13.554 11:07:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.554 11:07:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.554 11:07:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:13.554 11:07:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.554 11:07:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:13.554 11:07:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.554 11:07:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.554 00:04:13.554 real 0m4.690s 00:04:13.554 user 0m2.312s 00:04:13.554 sys 0m2.473s 00:04:13.554 11:07:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.554 ************************************ 00:04:13.554 END TEST hugepages 00:04:13.554 ************************************ 00:04:13.554 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:13.554 11:07:55 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:13.554 11:07:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.554 11:07:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.554 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:13.555 ************************************ 00:04:13.555 START TEST driver 00:04:13.555 ************************************ 00:04:13.555 11:07:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:13.555 * Looking for test storage... 
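(Editor's note: the long scan traced above is setup/common.sh's get_meminfo routine. It reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node index is given, splits each "Key: value kB" line with IFS=': ', and keeps hitting "continue" until the requested key, first HugePages_Total and then HugePages_Surp for node 0, matches, at which point the value is echoed. A minimal sketch of that pattern, written from what the trace shows rather than copied from the upstream script; the name get_meminfo_sketch is ours, not SPDK's:)

shopt -s extglob

# Sketch only: reproduce the meminfo scan visible in the xtrace above.
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-matching key is one "continue" in the trace
        echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done
    return 1
}

(With node 0 reporting HugePages_Surp: 0, the per-node accounting above leaves node0 at 1024 pages, which is exactly the 'node0=1024 expecting 1024' check that closes no_shrink_alloc.)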
00:04:13.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.814 11:07:55 -- setup/driver.sh@68 -- # setup reset 00:04:13.814 11:07:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.814 11:07:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.381 11:07:55 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.381 11:07:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.381 11:07:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.381 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:14.381 ************************************ 00:04:14.381 START TEST guess_driver 00:04:14.381 ************************************ 00:04:14.381 11:07:55 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:14.382 11:07:55 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.382 11:07:55 -- setup/driver.sh@47 -- # local fail=0 00:04:14.382 11:07:55 -- setup/driver.sh@49 -- # pick_driver 00:04:14.382 11:07:55 -- setup/driver.sh@36 -- # vfio 00:04:14.382 11:07:55 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.382 11:07:55 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.382 11:07:55 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.382 11:07:55 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.382 11:07:55 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:14.382 11:07:55 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:14.382 11:07:55 -- setup/driver.sh@32 -- # return 1 00:04:14.382 11:07:55 -- setup/driver.sh@38 -- # uio 00:04:14.382 11:07:55 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:14.382 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:14.382 11:07:55 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.382 Looking for driver=uio_pci_generic 00:04:14.382 11:07:55 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:14.382 11:07:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.382 11:07:55 -- setup/driver.sh@45 -- # setup output config 00:04:14.382 11:07:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.382 11:07:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.949 11:07:56 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:14.949 11:07:56 -- setup/driver.sh@58 -- # continue 00:04:14.949 11:07:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.949 11:07:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.949 11:07:56 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.949 11:07:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.949 11:07:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.949 11:07:56 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.949 11:07:56 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.949 11:07:56 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.949 11:07:56 -- setup/driver.sh@65 -- # setup reset 00:04:14.949 11:07:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.949 11:07:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.516 00:04:15.516 real 0m1.378s 00:04:15.516 user 0m0.538s 00:04:15.516 sys 0m0.837s 00:04:15.516 11:07:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.517 ************************************ 00:04:15.517 END TEST guess_driver 00:04:15.517 ************************************ 00:04:15.517 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.776 00:04:15.776 real 0m2.044s 00:04:15.776 user 0m0.773s 00:04:15.776 sys 0m1.331s 00:04:15.776 11:07:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.776 ************************************ 00:04:15.776 END TEST driver 00:04:15.776 ************************************ 00:04:15.776 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.776 11:07:57 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:15.776 11:07:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.776 11:07:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.776 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.776 ************************************ 00:04:15.776 START TEST devices 00:04:15.776 ************************************ 00:04:15.776 11:07:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:15.776 * Looking for test storage... 00:04:15.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.776 11:07:57 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:15.776 11:07:57 -- setup/devices.sh@192 -- # setup reset 00:04:15.776 11:07:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.776 11:07:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.711 11:07:57 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:16.711 11:07:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.711 11:07:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.711 11:07:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.711 11:07:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.711 11:07:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.711 11:07:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.711 11:07:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.711 11:07:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:16.711 11:07:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:16.711 11:07:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.711 11:07:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:16.711 11:07:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:16.711 11:07:57 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.711 11:07:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:16.711 11:07:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:16.711 11:07:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.711 11:07:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.711 11:07:57 -- setup/devices.sh@196 -- # blocks=() 00:04:16.711 11:07:57 -- setup/devices.sh@196 -- # declare -a blocks 00:04:16.711 11:07:57 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:16.711 11:07:57 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:16.711 11:07:57 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:16.711 11:07:57 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.711 11:07:57 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:16.711 11:07:57 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:16.711 11:07:57 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:16.711 11:07:57 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:16.711 11:07:57 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:16.711 11:07:57 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:16.711 11:07:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:16.711 No valid GPT data, bailing 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # pt= 00:04:16.711 11:07:58 -- scripts/common.sh@394 -- # return 1 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:16.711 11:07:58 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:16.711 11:07:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:16.711 11:07:58 -- setup/common.sh@80 -- # echo 5368709120 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:16.711 11:07:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.711 11:07:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:16.711 11:07:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:16.711 11:07:58 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:16.711 11:07:58 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:16.711 11:07:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:16.711 No valid GPT data, bailing 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # pt= 00:04:16.711 11:07:58 -- scripts/common.sh@394 -- # return 1 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:16.711 11:07:58 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:16.711 11:07:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:16.711 11:07:58 -- setup/common.sh@80 -- # echo 4294967296 00:04:16.711 11:07:58 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:16.711 11:07:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.711 11:07:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:16.711 11:07:58 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:16.711 11:07:58 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:16.711 11:07:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:16.711 No valid GPT data, bailing 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # pt= 00:04:16.711 11:07:58 -- scripts/common.sh@394 -- # return 1 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:16.711 11:07:58 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:16.711 11:07:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:16.711 11:07:58 -- setup/common.sh@80 -- # echo 4294967296 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:16.711 11:07:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.711 11:07:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:16.711 11:07:58 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:16.711 11:07:58 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:16.711 11:07:58 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:16.711 11:07:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:16.711 No valid GPT data, bailing 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.711 11:07:58 -- scripts/common.sh@393 -- # pt= 00:04:16.711 11:07:58 -- scripts/common.sh@394 -- # return 1 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:16.711 11:07:58 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:16.711 11:07:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:16.711 11:07:58 -- setup/common.sh@80 -- # echo 4294967296 00:04:16.711 11:07:58 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:16.711 11:07:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.711 11:07:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:16.711 11:07:58 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:16.711 11:07:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.711 11:07:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.711 11:07:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.711 11:07:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.711 11:07:58 -- common/autotest_common.sh@10 -- # set +x 00:04:16.711 
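(Editor's note: the devices.sh preamble above walks /sys/block/nvme*, maps each namespace to the PCI address of its controller, skips any disk that already carries a partition table, "No valid GPT data, bailing" from scripts/spdk-gpt.py means the disk is free, and keeps only disks of at least min_disk_size, 3221225472 bytes. A rough sketch of that selection, using only behaviour visible in the trace; the blkid PTTYPE probe stands in for spdk-gpt.py and the sysfs-based PCI lookup is our assumption, not the upstream helper:)

declare -a blocks
declare -A blocks_to_pci
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

for block in /sys/block/nvme*; do
    name=${block##*/}
    [[ $name == *c* ]] && continue          # skip controller-scoped names (nvmeXcYnZ)
    ctrl=${name%n*}                         # nvme0n1 -> nvme0
    pci=$(basename "$(readlink -f /sys/class/nvme/$ctrl/device)")
    # "block_in_use": a disk that already has a partition table is left alone.
    [[ -n $(blkid -s PTTYPE -o value /dev/$name 2>/dev/null) ]] && continue
    size=$(( $(< "$block/size") * 512 ))    # /sys/block/*/size is in 512-byte sectors
    (( size >= min_disk_size )) || continue
    blocks+=("$name")
    blocks_to_pci[$name]=$pci
done
printf 'candidate: %s\n' "${blocks[@]}"

(In this run that yields the four namespaces behind the "(( 4 > 0 ))" check, with nvme0n1 declared as test_disk for the mount tests that follow.)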
************************************ 00:04:16.711 START TEST nvme_mount 00:04:16.711 ************************************ 00:04:16.711 11:07:58 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:16.711 11:07:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.711 11:07:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.711 11:07:58 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.711 11:07:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.711 11:07:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.711 11:07:58 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.711 11:07:58 -- setup/common.sh@40 -- # local part_no=1 00:04:16.711 11:07:58 -- setup/common.sh@41 -- # local size=1073741824 00:04:16.711 11:07:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.711 11:07:58 -- setup/common.sh@44 -- # parts=() 00:04:16.711 11:07:58 -- setup/common.sh@44 -- # local parts 00:04:16.711 11:07:58 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.711 11:07:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.711 11:07:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.711 11:07:58 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.711 11:07:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.711 11:07:58 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:16.711 11:07:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.711 11:07:58 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:18.087 Creating new GPT entries in memory. 00:04:18.087 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.087 other utilities. 00:04:18.087 11:07:59 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.087 11:07:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.087 11:07:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.087 11:07:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.087 11:07:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:19.022 Creating new GPT entries in memory. 00:04:19.022 The operation has completed successfully. 
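(Editor's note: the partition_drive step that just completed wipes the GPT with sgdisk --zap-all and then creates the single test partition under a whole-disk flock while scripts/sync_dev_uevents.sh waits for the matching "add" uevent, so the test only proceeds once /dev/nvme0n1p1 exists. A hedged sketch of that sequence from the calls in the trace; the udevadm/polling wait below is a stand-in for sync_dev_uevents.sh, which is not reproduced here:)

disk=/dev/nvme0n1
size=$((1073741824 / 4096))                 # 262144, after "(( size /= 4096 ))"
part_start=2048
part_end=$((part_start + size - 1))         # 264191, matching --new=1:2048:264191

sgdisk "$disk" --zap-all                    # drop any existing GPT/MBR first
# The new partition is created while holding a whole-disk lock, as traced.
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
# Stand-in for sync_dev_uevents.sh: wait for the kernel to publish the node.
udevadm settle
until [[ -b ${disk}p1 ]]; do sleep 0.1; done

(The mkfs.ext4 -qF and mount calls in the next trace lines then run against the freshly created /dev/nvme0n1p1.)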
00:04:19.022 11:08:00 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.022 11:08:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.022 11:08:00 -- setup/common.sh@62 -- # wait 52153 00:04:19.022 11:08:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.022 11:08:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:19.022 11:08:00 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.022 11:08:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:19.022 11:08:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:19.022 11:08:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.022 11:08:00 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.022 11:08:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.022 11:08:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:19.022 11:08:00 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.022 11:08:00 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.022 11:08:00 -- setup/devices.sh@53 -- # local found=0 00:04:19.022 11:08:00 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.022 11:08:00 -- setup/devices.sh@56 -- # : 00:04:19.022 11:08:00 -- setup/devices.sh@59 -- # local pci status 00:04:19.022 11:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.022 11:08:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.022 11:08:00 -- setup/devices.sh@47 -- # setup output config 00:04:19.022 11:08:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.022 11:08:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.022 11:08:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.022 11:08:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.022 11:08:00 -- setup/devices.sh@63 -- # found=1 00:04:19.022 11:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.022 11:08:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.022 11:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.589 11:08:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.589 11:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.589 11:08:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.589 11:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.589 11:08:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.589 11:08:01 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:19.589 11:08:01 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.589 11:08:01 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.589 11:08:01 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.589 11:08:01 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:19.589 11:08:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.589 11:08:01 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.589 11:08:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.589 11:08:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.589 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.589 11:08:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.589 11:08:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.848 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:19.848 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:19.848 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:19.848 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:19.848 11:08:01 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:19.848 11:08:01 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:19.848 11:08:01 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.848 11:08:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:19.848 11:08:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:19.848 11:08:01 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.848 11:08:01 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.848 11:08:01 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.848 11:08:01 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:19.848 11:08:01 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.848 11:08:01 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.848 11:08:01 -- setup/devices.sh@53 -- # local found=0 00:04:19.848 11:08:01 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.848 11:08:01 -- setup/devices.sh@56 -- # : 00:04:19.848 11:08:01 -- setup/devices.sh@59 -- # local pci status 00:04:19.848 11:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.848 11:08:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.848 11:08:01 -- setup/devices.sh@47 -- # setup output config 00:04:19.848 11:08:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.848 11:08:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.107 11:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.107 11:08:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.107 11:08:01 -- setup/devices.sh@63 -- # found=1 00:04:20.107 11:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.107 11:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.107 
11:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.365 11:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.365 11:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.365 11:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.365 11:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.624 11:08:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.624 11:08:01 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:20.624 11:08:01 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.624 11:08:01 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.624 11:08:01 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.624 11:08:01 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.624 11:08:02 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:20.624 11:08:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:20.624 11:08:02 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.624 11:08:02 -- setup/devices.sh@50 -- # local mount_point= 00:04:20.624 11:08:02 -- setup/devices.sh@51 -- # local test_file= 00:04:20.624 11:08:02 -- setup/devices.sh@53 -- # local found=0 00:04:20.624 11:08:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.624 11:08:02 -- setup/devices.sh@59 -- # local pci status 00:04:20.624 11:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.624 11:08:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:20.624 11:08:02 -- setup/devices.sh@47 -- # setup output config 00:04:20.624 11:08:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.624 11:08:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.883 11:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.883 11:08:02 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:20.883 11:08:02 -- setup/devices.sh@63 -- # found=1 00:04:20.883 11:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.883 11:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.883 11:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.142 11:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.142 11:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.142 11:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.142 11:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.142 11:08:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.142 11:08:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.142 11:08:02 -- setup/devices.sh@68 -- # return 0 00:04:21.142 11:08:02 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:21.142 11:08:02 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.142 11:08:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.142 11:08:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.142 11:08:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.142 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:21.142 00:04:21.142 real 0m4.438s 00:04:21.142 user 0m0.973s 00:04:21.142 sys 0m1.164s 00:04:21.142 11:08:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.142 ************************************ 00:04:21.142 END TEST nvme_mount 00:04:21.142 ************************************ 00:04:21.142 11:08:02 -- common/autotest_common.sh@10 -- # set +x 00:04:21.401 11:08:02 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:21.401 11:08:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.401 11:08:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.401 11:08:02 -- common/autotest_common.sh@10 -- # set +x 00:04:21.401 ************************************ 00:04:21.401 START TEST dm_mount 00:04:21.401 ************************************ 00:04:21.401 11:08:02 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:21.401 11:08:02 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:21.401 11:08:02 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:21.401 11:08:02 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:21.401 11:08:02 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:21.401 11:08:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.401 11:08:02 -- setup/common.sh@40 -- # local part_no=2 00:04:21.401 11:08:02 -- setup/common.sh@41 -- # local size=1073741824 00:04:21.401 11:08:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.401 11:08:02 -- setup/common.sh@44 -- # parts=() 00:04:21.401 11:08:02 -- setup/common.sh@44 -- # local parts 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.401 11:08:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part++ )) 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.401 11:08:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part++ )) 00:04:21.401 11:08:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.401 11:08:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:21.401 11:08:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.401 11:08:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:22.336 Creating new GPT entries in memory. 00:04:22.336 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.336 other utilities. 00:04:22.336 11:08:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.336 11:08:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.336 11:08:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.336 11:08:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.336 11:08:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:23.273 Creating new GPT entries in memory. 00:04:23.273 The operation has completed successfully. 00:04:23.273 11:08:04 -- setup/common.sh@57 -- # (( part++ )) 00:04:23.273 11:08:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.273 11:08:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:23.273 11:08:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.273 11:08:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:24.650 The operation has completed successfully. 00:04:24.650 11:08:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:24.650 11:08:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.650 11:08:05 -- setup/common.sh@62 -- # wait 52608 00:04:24.650 11:08:05 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:24.650 11:08:05 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.650 11:08:05 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:24.650 11:08:05 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:24.650 11:08:05 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:24.650 11:08:05 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.650 11:08:05 -- setup/devices.sh@161 -- # break 00:04:24.650 11:08:05 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.650 11:08:05 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:24.650 11:08:05 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:24.650 11:08:05 -- setup/devices.sh@166 -- # dm=dm-0 00:04:24.650 11:08:05 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:24.650 11:08:05 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:24.650 11:08:05 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.650 11:08:05 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:24.650 11:08:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.650 11:08:05 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.650 11:08:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:24.650 11:08:05 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.650 11:08:05 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:24.650 11:08:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:24.650 11:08:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:24.650 11:08:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.650 11:08:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:24.650 11:08:05 -- setup/devices.sh@53 -- # local found=0 00:04:24.650 11:08:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.650 11:08:05 -- setup/devices.sh@56 -- # : 00:04:24.650 11:08:05 -- setup/devices.sh@59 -- # local pci status 00:04:24.650 11:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.650 11:08:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:24.650 11:08:05 -- setup/devices.sh@47 -- # setup output config 00:04:24.650 11:08:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.650 11:08:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.650 11:08:06 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.650 11:08:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:24.650 11:08:06 -- setup/devices.sh@63 -- # found=1 00:04:24.650 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.650 11:08:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.650 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.909 11:08:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.909 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.167 11:08:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.167 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.167 11:08:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.167 11:08:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:25.167 11:08:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.167 11:08:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.167 11:08:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.167 11:08:06 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.167 11:08:06 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.167 11:08:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:25.167 11:08:06 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.167 11:08:06 -- setup/devices.sh@50 -- # local mount_point= 00:04:25.167 11:08:06 -- setup/devices.sh@51 -- # local test_file= 00:04:25.167 11:08:06 -- setup/devices.sh@53 -- # local found=0 00:04:25.167 11:08:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.167 11:08:06 -- setup/devices.sh@59 -- # local pci status 00:04:25.167 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.167 11:08:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:25.167 11:08:06 -- setup/devices.sh@47 -- # setup output config 00:04:25.167 11:08:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.167 11:08:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.426 11:08:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.426 11:08:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.426 11:08:06 -- setup/devices.sh@63 -- # found=1 00:04:25.426 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.426 11:08:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.426 11:08:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.685 11:08:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.685 11:08:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.685 11:08:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.685 11:08:07 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.685 11:08:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.685 11:08:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.685 11:08:07 -- setup/devices.sh@68 -- # return 0 00:04:25.685 11:08:07 -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.685 11:08:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.685 11:08:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.685 11:08:07 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.685 11:08:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.685 11:08:07 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.943 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.943 11:08:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.943 11:08:07 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.943 00:04:25.943 real 0m4.537s 00:04:25.943 user 0m0.655s 00:04:25.943 sys 0m0.806s 00:04:25.943 11:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.943 11:08:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.943 ************************************ 00:04:25.943 END TEST dm_mount 00:04:25.943 ************************************ 00:04:25.943 11:08:07 -- setup/devices.sh@1 -- # cleanup 00:04:25.943 11:08:07 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.943 11:08:07 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.943 11:08:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.943 11:08:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.943 11:08:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.943 11:08:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.211 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.211 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.211 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.211 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.211 11:08:07 -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.211 11:08:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.211 11:08:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.211 11:08:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.211 11:08:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.211 11:08:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.211 11:08:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.211 00:04:26.211 real 0m10.468s 00:04:26.211 user 0m2.283s 00:04:26.211 sys 0m2.523s 00:04:26.211 11:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.211 11:08:07 -- common/autotest_common.sh@10 -- # set +x 00:04:26.211 ************************************ 00:04:26.211 END TEST devices 00:04:26.211 ************************************ 00:04:26.211 ************************************ 00:04:26.211 END TEST setup.sh 00:04:26.211 ************************************ 00:04:26.211 00:04:26.211 real 0m21.645s 00:04:26.211 user 0m7.270s 00:04:26.211 sys 0m8.805s 00:04:26.211 11:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.211 11:08:07 -- common/autotest_common.sh@10 -- # set +x 00:04:26.211 11:08:07 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:26.543 Hugepages 00:04:26.543 node hugesize free / total 00:04:26.543 node0 1048576kB 0 / 0 00:04:26.543 node0 2048kB 2048 / 2048 00:04:26.543 00:04:26.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.543 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:26.543 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:26.543 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:26.543 11:08:08 -- spdk/autotest.sh@141 -- # uname -s 00:04:26.543 11:08:08 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:26.543 11:08:08 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:26.543 11:08:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.368 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.368 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.368 11:08:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:28.744 11:08:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:28.744 11:08:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:28.744 11:08:09 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.744 11:08:09 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:28.744 11:08:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.744 11:08:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.744 11:08:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.744 11:08:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.744 11:08:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:28.744 11:08:10 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:28.745 11:08:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:28.745 11:08:10 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.003 Waiting for block devices as requested 00:04:29.003 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.003 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.003 11:08:10 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:29.003 11:08:10 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:29.003 11:08:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.003 11:08:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:29.003 11:08:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:29.003 11:08:10 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:29.003 11:08:10 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:29.003 11:08:10 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:29.003 11:08:10 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:29.003 11:08:10 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:29.003 11:08:10 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:29.003 11:08:10 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:29.003 11:08:10 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:29.003 11:08:10 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:29.003 11:08:10 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:29.003 11:08:10 -- common/autotest_common.sh@1542 -- # continue 00:04:29.004 11:08:10 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:29.004 11:08:10 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:29.004 11:08:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:29.004 11:08:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.004 11:08:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:29.004 11:08:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:29.004 11:08:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:29.004 11:08:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:29.004 11:08:10 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:29.004 11:08:10 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:29.262 11:08:10 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:29.262 11:08:10 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:29.262 11:08:10 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:29.262 11:08:10 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:29.262 11:08:10 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:29.262 11:08:10 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:29.262 11:08:10 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:29.262 11:08:10 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:29.262 11:08:10 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:29.262 11:08:10 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:29.262 11:08:10 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:29.262 11:08:10 -- common/autotest_common.sh@1542 -- # continue 00:04:29.262 11:08:10 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:29.262 11:08:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.262 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:29.262 11:08:10 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:29.262 11:08:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.262 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:29.262 11:08:10 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.829 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.088 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:30.088 11:08:11 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:30.088 11:08:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.088 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 11:08:11 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:30.088 11:08:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:30.088 11:08:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.088 11:08:11 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:30.088 11:08:11 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:30.088 11:08:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.088 11:08:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.088 11:08:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.088 11:08:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.088 11:08:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.088 11:08:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.088 11:08:11 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:30.088 11:08:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:30.088 11:08:11 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.088 11:08:11 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:30.088 11:08:11 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:30.088 11:08:11 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.088 11:08:11 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.088 11:08:11 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:30.088 11:08:11 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:30.088 11:08:11 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.088 11:08:11 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:30.088 11:08:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:30.088 11:08:11 -- common/autotest_common.sh@1578 -- # return 0 00:04:30.088 11:08:11 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:30.088 11:08:11 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:30.088 11:08:11 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:30.088 11:08:11 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:30.088 11:08:11 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:30.088 11:08:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:30.088 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 11:08:11 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.088 11:08:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.088 11:08:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.088 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 ************************************ 00:04:30.088 START TEST env 00:04:30.088 ************************************ 00:04:30.088 11:08:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.346 * Looking for test storage... 
00:04:30.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.346 11:08:11 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.346 11:08:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.346 11:08:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.346 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.346 ************************************ 00:04:30.346 START TEST env_memory 00:04:30.346 ************************************ 00:04:30.347 11:08:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.347 00:04:30.347 00:04:30.347 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.347 http://cunit.sourceforge.net/ 00:04:30.347 00:04:30.347 00:04:30.347 Suite: memory 00:04:30.347 Test: alloc and free memory map ...[2024-10-13 11:08:11.788703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.347 passed 00:04:30.347 Test: mem map translation ...[2024-10-13 11:08:11.819620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.347 [2024-10-13 11:08:11.819668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.347 [2024-10-13 11:08:11.819734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.347 [2024-10-13 11:08:11.819744] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.347 passed 00:04:30.347 Test: mem map registration ...[2024-10-13 11:08:11.883515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.347 [2024-10-13 11:08:11.883555] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:30.347 passed 00:04:30.606 Test: mem map adjacent registrations ...passed 00:04:30.606 00:04:30.606 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.606 suites 1 1 n/a 0 0 00:04:30.606 tests 4 4 4 0 0 00:04:30.606 asserts 152 152 152 0 n/a 00:04:30.606 00:04:30.606 Elapsed time = 0.213 seconds 00:04:30.606 00:04:30.606 real 0m0.230s 00:04:30.606 user 0m0.214s 00:04:30.606 sys 0m0.013s 00:04:30.606 11:08:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.606 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.606 ************************************ 00:04:30.606 END TEST env_memory 00:04:30.606 ************************************ 00:04:30.606 11:08:12 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.606 11:08:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.606 11:08:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.606 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:30.606 ************************************ 00:04:30.606 START TEST env_vtophys 00:04:30.606 ************************************ 00:04:30.606 11:08:12 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.606 EAL: lib.eal log level changed from notice to debug 00:04:30.606 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.606 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.606 EAL: Maximum logical cores by configuration: 128 00:04:30.606 EAL: Detected CPU lcores: 10 00:04:30.606 EAL: Detected NUMA nodes: 1 00:04:30.606 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:30.606 EAL: Detected shared linkage of DPDK 00:04:30.606 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.606 EAL: Selected IOVA mode 'PA' 00:04:30.606 EAL: Probing VFIO support... 00:04:30.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.606 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.606 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.606 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.606 EAL: Setting up physically contiguous memory... 00:04:30.606 EAL: Setting maximum number of open files to 524288 00:04:30.606 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.606 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.606 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.606 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.606 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.606 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.606 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.606 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.606 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.606 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.606 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.606 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.606 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.606 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.606 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.606 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.606 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.606 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.606 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.606 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.606 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.606 EAL: Hugepages will be freed exactly as allocated. 
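The memseg reservations above are easy to sanity-check: each of the four lists is created with n_segs:8192 and hugepage_sz:2097152, which is exactly the 0x400000000-byte virtual area EAL reserves per list. A minimal shell check of that arithmetic, using only the numbers printed in the EAL lines above:

  # 8192 segments x 2 MiB hugepages = one memseg list's VA reservation
  printf '0x%x bytes per list\n' $((8192 * 2097152))   # 0x400000000 = 16 GiB, matching each "size = 0x400000000" line
  # four lists on socket 0 -> 64 GiB of address space reserved up front, backed lazily by hugepages as the heap grows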
00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: TSC frequency is ~2200000 KHz 00:04:30.606 EAL: Main lcore 0 is ready (tid=7f8a68837a00;cpuset=[0]) 00:04:30.606 EAL: Trying to obtain current memory policy. 00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 0 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.606 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.606 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.606 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.606 00:04:30.606 00:04:30.606 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.606 http://cunit.sourceforge.net/ 00:04:30.606 00:04:30.606 00:04:30.606 Suite: components_suite 00:04:30.606 Test: vtophys_malloc_test ...passed 00:04:30.606 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 4 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.606 EAL: Trying to obtain current memory policy. 00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 4 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.606 EAL: Trying to obtain current memory policy. 00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 4 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.606 EAL: Trying to obtain current memory policy. 
00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 4 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.606 EAL: request: mp_malloc_sync 00:04:30.606 EAL: No shared files mode enabled, IPC is disabled 00:04:30.606 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.606 EAL: Trying to obtain current memory policy. 00:04:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.606 EAL: Restoring previous memory policy: 4 00:04:30.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.607 EAL: request: mp_malloc_sync 00:04:30.607 EAL: No shared files mode enabled, IPC is disabled 00:04:30.607 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.607 EAL: request: mp_malloc_sync 00:04:30.607 EAL: No shared files mode enabled, IPC is disabled 00:04:30.607 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.607 EAL: Trying to obtain current memory policy. 00:04:30.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.607 EAL: Restoring previous memory policy: 4 00:04:30.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.607 EAL: request: mp_malloc_sync 00:04:30.607 EAL: No shared files mode enabled, IPC is disabled 00:04:30.607 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.866 EAL: Trying to obtain current memory policy. 00:04:30.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.866 EAL: Restoring previous memory policy: 4 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.866 EAL: Trying to obtain current memory policy. 00:04:30.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.866 EAL: Restoring previous memory policy: 4 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.866 EAL: Trying to obtain current memory policy. 
00:04:30.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.866 EAL: Restoring previous memory policy: 4 00:04:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.866 EAL: request: mp_malloc_sync 00:04:30.866 EAL: No shared files mode enabled, IPC is disabled 00:04:30.866 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.126 EAL: request: mp_malloc_sync 00:04:31.126 EAL: No shared files mode enabled, IPC is disabled 00:04:31.126 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.126 EAL: Trying to obtain current memory policy. 00:04:31.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.126 EAL: Restoring previous memory policy: 4 00:04:31.126 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.126 EAL: request: mp_malloc_sync 00:04:31.126 EAL: No shared files mode enabled, IPC is disabled 00:04:31.126 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.384 passed 00:04:31.384 00:04:31.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.384 suites 1 1 n/a 0 0 00:04:31.384 tests 2 2 2 0 0 00:04:31.384 asserts 5218 5218 5218 0 n/a 00:04:31.384 00:04:31.384 Elapsed time = 0.671 seconds 00:04:31.384 EAL: request: mp_malloc_sync 00:04:31.384 EAL: No shared files mode enabled, IPC is disabled 00:04:31.384 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.384 EAL: request: mp_malloc_sync 00:04:31.384 EAL: No shared files mode enabled, IPC is disabled 00:04:31.384 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.384 EAL: No shared files mode enabled, IPC is disabled 00:04:31.384 EAL: No shared files mode enabled, IPC is disabled 00:04:31.384 EAL: No shared files mode enabled, IPC is disabled 00:04:31.384 00:04:31.384 real 0m0.855s 00:04:31.384 user 0m0.437s 00:04:31.384 sys 0m0.291s 00:04:31.384 11:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.384 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.384 ************************************ 00:04:31.384 END TEST env_vtophys 00:04:31.384 ************************************ 00:04:31.384 11:08:12 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.384 11:08:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.384 11:08:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.384 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.384 ************************************ 00:04:31.384 START TEST env_pci 00:04:31.384 ************************************ 00:04:31.384 11:08:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.384 00:04:31.384 00:04:31.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.384 http://cunit.sourceforge.net/ 00:04:31.384 00:04:31.384 00:04:31.384 Suite: pci 00:04:31.384 Test: pci_hook ...[2024-10-13 11:08:12.947243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53734 has claimed it 00:04:31.384 passed 00:04:31.384 00:04:31.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.384 suites 1 1 n/a 0 0 00:04:31.384 tests 1 1 1 0 0 00:04:31.385 asserts 25 25 25 0 n/a 00:04:31.385 00:04:31.385 Elapsed time = 0.002 seconds 00:04:31.385 EAL: Cannot find device (10000:00:01.0) 00:04:31.385 EAL: Failed to attach device 
on primary process 00:04:31.385 00:04:31.385 real 0m0.023s 00:04:31.385 user 0m0.015s 00:04:31.385 sys 0m0.008s 00:04:31.385 11:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.385 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.385 ************************************ 00:04:31.385 END TEST env_pci 00:04:31.385 ************************************ 00:04:31.643 11:08:12 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.643 11:08:12 -- env/env.sh@15 -- # uname 00:04:31.643 11:08:12 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.643 11:08:12 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.643 11:08:12 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.643 11:08:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:31.643 11:08:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.643 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.643 ************************************ 00:04:31.643 START TEST env_dpdk_post_init 00:04:31.643 ************************************ 00:04:31.643 11:08:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.643 EAL: Detected CPU lcores: 10 00:04:31.643 EAL: Detected NUMA nodes: 1 00:04:31.643 EAL: Detected shared linkage of DPDK 00:04:31.643 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.643 EAL: Selected IOVA mode 'PA' 00:04:31.643 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.643 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:31.643 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:31.643 Starting DPDK initialization... 00:04:31.643 Starting SPDK post initialization... 00:04:31.643 SPDK NVMe probe 00:04:31.643 Attaching to 0000:00:06.0 00:04:31.643 Attaching to 0000:00:07.0 00:04:31.643 Attached to 0000:00:06.0 00:04:31.643 Attached to 0000:00:07.0 00:04:31.643 Cleaning up... 
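The probe lines above show spdk_nvme attaching to 0000:00:06.0 and 0000:00:07.0, the same controllers that setup.sh moved from the kernel nvme driver to uio_pci_generic earlier in this log. A minimal sketch for checking and flipping that binding by hand; it assumes it is run from the SPDK repository root and that the BDFs match this VM's layout:

  # which kernel driver currently owns each controller
  for bdf in 0000:00:06.0 0000:00:07.0; do
    printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
  done
  sudo ./scripts/setup.sh           # kernel nvme -> uio_pci_generic/vfio-pci for userspace I/O
  sudo ./scripts/setup.sh reset     # hand the controllers back to the kernel nvme driver
  sudo ./scripts/setup.sh status    # prints the Hugepages / "Type BDF Vendor Device ..." table seen above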
00:04:31.643 ************************************ 00:04:31.643 END TEST env_dpdk_post_init 00:04:31.643 ************************************ 00:04:31.643 00:04:31.643 real 0m0.177s 00:04:31.643 user 0m0.050s 00:04:31.643 sys 0m0.028s 00:04:31.643 11:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.643 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.643 11:08:13 -- env/env.sh@26 -- # uname 00:04:31.643 11:08:13 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.643 11:08:13 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.643 11:08:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.643 11:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.643 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.643 ************************************ 00:04:31.643 START TEST env_mem_callbacks 00:04:31.643 ************************************ 00:04:31.643 11:08:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.902 EAL: Detected CPU lcores: 10 00:04:31.902 EAL: Detected NUMA nodes: 1 00:04:31.902 EAL: Detected shared linkage of DPDK 00:04:31.902 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.902 EAL: Selected IOVA mode 'PA' 00:04:31.902 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.902 00:04:31.902 00:04:31.902 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.902 http://cunit.sourceforge.net/ 00:04:31.902 00:04:31.902 00:04:31.902 Suite: memory 00:04:31.902 Test: test ... 00:04:31.902 register 0x200000200000 2097152 00:04:31.902 malloc 3145728 00:04:31.902 register 0x200000400000 4194304 00:04:31.902 buf 0x200000500000 len 3145728 PASSED 00:04:31.902 malloc 64 00:04:31.902 buf 0x2000004fff40 len 64 PASSED 00:04:31.902 malloc 4194304 00:04:31.902 register 0x200000800000 6291456 00:04:31.902 buf 0x200000a00000 len 4194304 PASSED 00:04:31.902 free 0x200000500000 3145728 00:04:31.902 free 0x2000004fff40 64 00:04:31.902 unregister 0x200000400000 4194304 PASSED 00:04:31.902 free 0x200000a00000 4194304 00:04:31.902 unregister 0x200000800000 6291456 PASSED 00:04:31.902 malloc 8388608 00:04:31.902 register 0x200000400000 10485760 00:04:31.902 buf 0x200000600000 len 8388608 PASSED 00:04:31.902 free 0x200000600000 8388608 00:04:31.902 unregister 0x200000400000 10485760 PASSED 00:04:31.902 passed 00:04:31.902 00:04:31.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.902 suites 1 1 n/a 0 0 00:04:31.902 tests 1 1 1 0 0 00:04:31.902 asserts 15 15 15 0 n/a 00:04:31.902 00:04:31.902 Elapsed time = 0.008 seconds 00:04:31.902 00:04:31.902 real 0m0.145s 00:04:31.902 user 0m0.018s 00:04:31.902 sys 0m0.024s 00:04:31.902 11:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.902 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.902 ************************************ 00:04:31.902 END TEST env_mem_callbacks 00:04:31.902 ************************************ 00:04:31.902 00:04:31.902 real 0m1.772s 00:04:31.902 user 0m0.851s 00:04:31.902 sys 0m0.567s 00:04:31.902 11:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.902 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.902 ************************************ 00:04:31.902 END TEST env 00:04:31.902 ************************************ 00:04:31.902 11:08:13 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
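The env suite that just finished is itself a thin wrapper: env.sh uses run_test to launch one standalone binary per area (memory_ut, vtophys, pci_ut, env_dpdk_post_init, mem_callbacks). A minimal sketch for re-running pieces of it outside the CI wrapper, using the paths printed above; hugepage and device access may require root, and the extra DPDK arguments are only needed where this log shows them:

  cd /home/vagrant/spdk_repo/spdk
  sudo test/env/env.sh                                   # whole suite, same START/END TEST banners as above
  sudo test/env/memory/memory_ut                         # just the mem map unit tests
  sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000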
00:04:31.902 11:08:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.902 11:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.902 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.902 ************************************ 00:04:31.902 START TEST rpc 00:04:31.902 ************************************ 00:04:31.903 11:08:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.162 * Looking for test storage... 00:04:32.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.162 11:08:13 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.162 11:08:13 -- rpc/rpc.sh@65 -- # spdk_pid=53848 00:04:32.162 11:08:13 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.162 11:08:13 -- rpc/rpc.sh@67 -- # waitforlisten 53848 00:04:32.162 11:08:13 -- common/autotest_common.sh@819 -- # '[' -z 53848 ']' 00:04:32.162 11:08:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.162 11:08:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.162 11:08:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.162 11:08:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:32.162 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:32.162 [2024-10-13 11:08:13.605963] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:32.162 [2024-10-13 11:08:13.606060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53848 ] 00:04:32.162 [2024-10-13 11:08:13.743725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.421 [2024-10-13 11:08:13.810804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.421 [2024-10-13 11:08:13.811194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.421 [2024-10-13 11:08:13.811355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53848' to capture a snapshot of events at runtime. 00:04:32.421 [2024-10-13 11:08:13.811581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53848 for offline analysis/debug. 
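The app_setup_trace notices above double as usage instructions: because the target was started with '-e bdev', every bdev tracepoint from this run lands in the named shared-memory file. A minimal sketch of both capture paths, using exactly the hints printed; pid 53848 and the shm filename are specific to this run, and the spdk_trace tool is assumed to sit next to spdk_tgt in build/bin:

  # live snapshot while the target is still running (command suggested by app_setup_trace above)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 53848
  # or preserve the trace file for offline decoding after the target exits
  cp /dev/shm/spdk_tgt_trace.pid53848 /tmp/ && \
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid53848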
00:04:32.421 [2024-10-13 11:08:13.811817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.989 11:08:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:32.989 11:08:14 -- common/autotest_common.sh@852 -- # return 0 00:04:32.989 11:08:14 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.989 11:08:14 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.989 11:08:14 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:32.989 11:08:14 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:32.989 11:08:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.990 11:08:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.990 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:32.990 ************************************ 00:04:32.990 START TEST rpc_integrity 00:04:32.990 ************************************ 00:04:32.990 11:08:14 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:32.990 11:08:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.990 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.990 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:32.990 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.990 11:08:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.990 11:08:14 -- rpc/rpc.sh@13 -- # jq length 00:04:33.249 11:08:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.249 11:08:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.249 11:08:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.249 { 00:04:33.249 "name": "Malloc0", 00:04:33.249 "aliases": [ 00:04:33.249 "6e4662ad-f050-4f32-93d1-041ef5af2f11" 00:04:33.249 ], 00:04:33.249 "product_name": "Malloc disk", 00:04:33.249 "block_size": 512, 00:04:33.249 "num_blocks": 16384, 00:04:33.249 "uuid": "6e4662ad-f050-4f32-93d1-041ef5af2f11", 00:04:33.249 "assigned_rate_limits": { 00:04:33.249 "rw_ios_per_sec": 0, 00:04:33.249 "rw_mbytes_per_sec": 0, 00:04:33.249 "r_mbytes_per_sec": 0, 00:04:33.249 "w_mbytes_per_sec": 0 00:04:33.249 }, 00:04:33.249 "claimed": false, 00:04:33.249 "zoned": false, 00:04:33.249 "supported_io_types": { 00:04:33.249 "read": true, 00:04:33.249 "write": true, 00:04:33.249 "unmap": true, 00:04:33.249 "write_zeroes": true, 00:04:33.249 "flush": true, 00:04:33.249 "reset": true, 00:04:33.249 "compare": false, 00:04:33.249 "compare_and_write": false, 00:04:33.249 "abort": true, 00:04:33.249 "nvme_admin": false, 00:04:33.249 "nvme_io": false 00:04:33.249 }, 00:04:33.249 "memory_domains": [ 00:04:33.249 { 00:04:33.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.249 
"dma_device_type": 2 00:04:33.249 } 00:04:33.249 ], 00:04:33.249 "driver_specific": {} 00:04:33.249 } 00:04:33.249 ]' 00:04:33.249 11:08:14 -- rpc/rpc.sh@17 -- # jq length 00:04:33.249 11:08:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.249 11:08:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 [2024-10-13 11:08:14.706835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.249 [2024-10-13 11:08:14.706891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.249 [2024-10-13 11:08:14.706908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7a4c0 00:04:33.249 [2024-10-13 11:08:14.706916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.249 [2024-10-13 11:08:14.708335] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.249 [2024-10-13 11:08:14.708390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.249 Passthru0 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.249 { 00:04:33.249 "name": "Malloc0", 00:04:33.249 "aliases": [ 00:04:33.249 "6e4662ad-f050-4f32-93d1-041ef5af2f11" 00:04:33.249 ], 00:04:33.249 "product_name": "Malloc disk", 00:04:33.249 "block_size": 512, 00:04:33.249 "num_blocks": 16384, 00:04:33.249 "uuid": "6e4662ad-f050-4f32-93d1-041ef5af2f11", 00:04:33.249 "assigned_rate_limits": { 00:04:33.249 "rw_ios_per_sec": 0, 00:04:33.249 "rw_mbytes_per_sec": 0, 00:04:33.249 "r_mbytes_per_sec": 0, 00:04:33.249 "w_mbytes_per_sec": 0 00:04:33.249 }, 00:04:33.249 "claimed": true, 00:04:33.249 "claim_type": "exclusive_write", 00:04:33.249 "zoned": false, 00:04:33.249 "supported_io_types": { 00:04:33.249 "read": true, 00:04:33.249 "write": true, 00:04:33.249 "unmap": true, 00:04:33.249 "write_zeroes": true, 00:04:33.249 "flush": true, 00:04:33.249 "reset": true, 00:04:33.249 "compare": false, 00:04:33.249 "compare_and_write": false, 00:04:33.249 "abort": true, 00:04:33.249 "nvme_admin": false, 00:04:33.249 "nvme_io": false 00:04:33.249 }, 00:04:33.249 "memory_domains": [ 00:04:33.249 { 00:04:33.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.249 "dma_device_type": 2 00:04:33.249 } 00:04:33.249 ], 00:04:33.249 "driver_specific": {} 00:04:33.249 }, 00:04:33.249 { 00:04:33.249 "name": "Passthru0", 00:04:33.249 "aliases": [ 00:04:33.249 "8601dde8-d1c2-586e-bb2a-a3d093a6e270" 00:04:33.249 ], 00:04:33.249 "product_name": "passthru", 00:04:33.249 "block_size": 512, 00:04:33.249 "num_blocks": 16384, 00:04:33.249 "uuid": "8601dde8-d1c2-586e-bb2a-a3d093a6e270", 00:04:33.249 "assigned_rate_limits": { 00:04:33.249 "rw_ios_per_sec": 0, 00:04:33.249 "rw_mbytes_per_sec": 0, 00:04:33.249 "r_mbytes_per_sec": 0, 00:04:33.249 "w_mbytes_per_sec": 0 00:04:33.249 }, 00:04:33.249 "claimed": false, 00:04:33.249 "zoned": false, 00:04:33.249 "supported_io_types": { 00:04:33.249 "read": true, 00:04:33.249 "write": true, 00:04:33.249 "unmap": true, 00:04:33.249 
"write_zeroes": true, 00:04:33.249 "flush": true, 00:04:33.249 "reset": true, 00:04:33.249 "compare": false, 00:04:33.249 "compare_and_write": false, 00:04:33.249 "abort": true, 00:04:33.249 "nvme_admin": false, 00:04:33.249 "nvme_io": false 00:04:33.249 }, 00:04:33.249 "memory_domains": [ 00:04:33.249 { 00:04:33.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.249 "dma_device_type": 2 00:04:33.249 } 00:04:33.249 ], 00:04:33.249 "driver_specific": { 00:04:33.249 "passthru": { 00:04:33.249 "name": "Passthru0", 00:04:33.249 "base_bdev_name": "Malloc0" 00:04:33.249 } 00:04:33.249 } 00:04:33.249 } 00:04:33.249 ]' 00:04:33.249 11:08:14 -- rpc/rpc.sh@21 -- # jq length 00:04:33.249 11:08:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.249 11:08:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.249 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.249 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.249 11:08:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.249 11:08:14 -- rpc/rpc.sh@26 -- # jq length 00:04:33.509 ************************************ 00:04:33.509 END TEST rpc_integrity 00:04:33.509 ************************************ 00:04:33.509 11:08:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.509 00:04:33.509 real 0m0.313s 00:04:33.509 user 0m0.213s 00:04:33.509 sys 0m0.031s 00:04:33.509 11:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.509 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 11:08:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.509 11:08:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.509 11:08:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.509 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 ************************************ 00:04:33.509 START TEST rpc_plugins 00:04:33.509 ************************************ 00:04:33.509 11:08:14 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:33.509 11:08:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.509 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.509 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.509 11:08:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.509 11:08:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.509 11:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.509 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 11:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.509 11:08:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.509 { 00:04:33.509 "name": "Malloc1", 00:04:33.509 "aliases": [ 00:04:33.509 "434c95fd-bc71-4620-961c-db555cbcbdb3" 00:04:33.509 ], 00:04:33.509 "product_name": "Malloc disk", 00:04:33.509 
"block_size": 4096, 00:04:33.509 "num_blocks": 256, 00:04:33.509 "uuid": "434c95fd-bc71-4620-961c-db555cbcbdb3", 00:04:33.509 "assigned_rate_limits": { 00:04:33.509 "rw_ios_per_sec": 0, 00:04:33.509 "rw_mbytes_per_sec": 0, 00:04:33.509 "r_mbytes_per_sec": 0, 00:04:33.509 "w_mbytes_per_sec": 0 00:04:33.509 }, 00:04:33.509 "claimed": false, 00:04:33.509 "zoned": false, 00:04:33.509 "supported_io_types": { 00:04:33.509 "read": true, 00:04:33.509 "write": true, 00:04:33.509 "unmap": true, 00:04:33.509 "write_zeroes": true, 00:04:33.509 "flush": true, 00:04:33.509 "reset": true, 00:04:33.509 "compare": false, 00:04:33.509 "compare_and_write": false, 00:04:33.509 "abort": true, 00:04:33.509 "nvme_admin": false, 00:04:33.509 "nvme_io": false 00:04:33.509 }, 00:04:33.509 "memory_domains": [ 00:04:33.509 { 00:04:33.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.509 "dma_device_type": 2 00:04:33.509 } 00:04:33.509 ], 00:04:33.509 "driver_specific": {} 00:04:33.509 } 00:04:33.509 ]' 00:04:33.509 11:08:14 -- rpc/rpc.sh@32 -- # jq length 00:04:33.509 11:08:15 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:33.509 11:08:15 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:33.509 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.509 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.509 11:08:15 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:33.509 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.509 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.509 11:08:15 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:33.509 11:08:15 -- rpc/rpc.sh@36 -- # jq length 00:04:33.509 ************************************ 00:04:33.509 END TEST rpc_plugins 00:04:33.509 ************************************ 00:04:33.509 11:08:15 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:33.509 00:04:33.509 real 0m0.163s 00:04:33.509 user 0m0.114s 00:04:33.509 sys 0m0.014s 00:04:33.509 11:08:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.509 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.768 11:08:15 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:33.768 11:08:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.768 11:08:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.768 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.768 ************************************ 00:04:33.768 START TEST rpc_trace_cmd_test 00:04:33.768 ************************************ 00:04:33.768 11:08:15 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:33.768 11:08:15 -- rpc/rpc.sh@40 -- # local info 00:04:33.768 11:08:15 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:33.768 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.768 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.768 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.768 11:08:15 -- rpc/rpc.sh@42 -- # info='{ 00:04:33.768 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53848", 00:04:33.768 "tpoint_group_mask": "0x8", 00:04:33.768 "iscsi_conn": { 00:04:33.768 "mask": "0x2", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "scsi": { 00:04:33.768 "mask": "0x4", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "bdev": { 00:04:33.768 "mask": "0x8", 00:04:33.768 "tpoint_mask": 
"0xffffffffffffffff" 00:04:33.768 }, 00:04:33.768 "nvmf_rdma": { 00:04:33.768 "mask": "0x10", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "nvmf_tcp": { 00:04:33.768 "mask": "0x20", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "ftl": { 00:04:33.768 "mask": "0x40", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "blobfs": { 00:04:33.768 "mask": "0x80", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "dsa": { 00:04:33.768 "mask": "0x200", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "thread": { 00:04:33.768 "mask": "0x400", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "nvme_pcie": { 00:04:33.768 "mask": "0x800", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "iaa": { 00:04:33.768 "mask": "0x1000", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "nvme_tcp": { 00:04:33.768 "mask": "0x2000", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 }, 00:04:33.768 "bdev_nvme": { 00:04:33.768 "mask": "0x4000", 00:04:33.768 "tpoint_mask": "0x0" 00:04:33.768 } 00:04:33.768 }' 00:04:33.768 11:08:15 -- rpc/rpc.sh@43 -- # jq length 00:04:33.768 11:08:15 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:33.768 11:08:15 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:33.768 11:08:15 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:33.768 11:08:15 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:33.768 11:08:15 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:33.768 11:08:15 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.034 11:08:15 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.034 11:08:15 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.034 ************************************ 00:04:34.034 END TEST rpc_trace_cmd_test 00:04:34.034 ************************************ 00:04:34.034 11:08:15 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.034 00:04:34.034 real 0m0.268s 00:04:34.034 user 0m0.230s 00:04:34.034 sys 0m0.028s 00:04:34.034 11:08:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 11:08:15 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.034 11:08:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.034 11:08:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.034 11:08:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.034 11:08:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 ************************************ 00:04:34.034 START TEST rpc_daemon_integrity 00:04:34.034 ************************************ 00:04:34.034 11:08:15 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:34.034 11:08:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.034 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.034 11:08:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.034 11:08:15 -- rpc/rpc.sh@13 -- # jq length 00:04:34.034 11:08:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.034 11:08:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.034 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.034 11:08:15 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.034 11:08:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.034 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.034 11:08:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.034 { 00:04:34.034 "name": "Malloc2", 00:04:34.034 "aliases": [ 00:04:34.034 "d0961bd2-462f-4e25-8b0a-13cdcff98821" 00:04:34.034 ], 00:04:34.034 "product_name": "Malloc disk", 00:04:34.034 "block_size": 512, 00:04:34.034 "num_blocks": 16384, 00:04:34.034 "uuid": "d0961bd2-462f-4e25-8b0a-13cdcff98821", 00:04:34.034 "assigned_rate_limits": { 00:04:34.034 "rw_ios_per_sec": 0, 00:04:34.034 "rw_mbytes_per_sec": 0, 00:04:34.034 "r_mbytes_per_sec": 0, 00:04:34.034 "w_mbytes_per_sec": 0 00:04:34.034 }, 00:04:34.034 "claimed": false, 00:04:34.034 "zoned": false, 00:04:34.034 "supported_io_types": { 00:04:34.034 "read": true, 00:04:34.034 "write": true, 00:04:34.034 "unmap": true, 00:04:34.034 "write_zeroes": true, 00:04:34.034 "flush": true, 00:04:34.034 "reset": true, 00:04:34.034 "compare": false, 00:04:34.034 "compare_and_write": false, 00:04:34.034 "abort": true, 00:04:34.034 "nvme_admin": false, 00:04:34.034 "nvme_io": false 00:04:34.034 }, 00:04:34.034 "memory_domains": [ 00:04:34.034 { 00:04:34.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.034 "dma_device_type": 2 00:04:34.034 } 00:04:34.034 ], 00:04:34.034 "driver_specific": {} 00:04:34.034 } 00:04:34.034 ]' 00:04:34.034 11:08:15 -- rpc/rpc.sh@17 -- # jq length 00:04:34.034 11:08:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.034 11:08:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.034 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.034 [2024-10-13 11:08:15.615232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.034 [2024-10-13 11:08:15.615290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.034 [2024-10-13 11:08:15.615306] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7ac40 00:04:34.034 [2024-10-13 11:08:15.615314] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.034 [2024-10-13 11:08:15.616618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.034 [2024-10-13 11:08:15.616652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.034 Passthru0 00:04:34.034 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.034 11:08:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.034 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.034 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.296 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.296 11:08:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.296 { 00:04:34.296 "name": "Malloc2", 00:04:34.296 "aliases": [ 00:04:34.296 "d0961bd2-462f-4e25-8b0a-13cdcff98821" 00:04:34.296 ], 00:04:34.296 "product_name": "Malloc disk", 00:04:34.296 "block_size": 512, 00:04:34.296 "num_blocks": 16384, 00:04:34.296 "uuid": "d0961bd2-462f-4e25-8b0a-13cdcff98821", 00:04:34.296 "assigned_rate_limits": { 00:04:34.296 "rw_ios_per_sec": 0, 00:04:34.296 "rw_mbytes_per_sec": 0, 00:04:34.296 "r_mbytes_per_sec": 0, 00:04:34.296 
"w_mbytes_per_sec": 0 00:04:34.296 }, 00:04:34.296 "claimed": true, 00:04:34.296 "claim_type": "exclusive_write", 00:04:34.296 "zoned": false, 00:04:34.296 "supported_io_types": { 00:04:34.296 "read": true, 00:04:34.296 "write": true, 00:04:34.296 "unmap": true, 00:04:34.296 "write_zeroes": true, 00:04:34.296 "flush": true, 00:04:34.296 "reset": true, 00:04:34.296 "compare": false, 00:04:34.296 "compare_and_write": false, 00:04:34.296 "abort": true, 00:04:34.296 "nvme_admin": false, 00:04:34.296 "nvme_io": false 00:04:34.296 }, 00:04:34.296 "memory_domains": [ 00:04:34.296 { 00:04:34.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.296 "dma_device_type": 2 00:04:34.296 } 00:04:34.296 ], 00:04:34.296 "driver_specific": {} 00:04:34.296 }, 00:04:34.296 { 00:04:34.296 "name": "Passthru0", 00:04:34.296 "aliases": [ 00:04:34.296 "6382ac34-99af-5c49-abaf-3cd6baa0eec4" 00:04:34.296 ], 00:04:34.296 "product_name": "passthru", 00:04:34.296 "block_size": 512, 00:04:34.296 "num_blocks": 16384, 00:04:34.296 "uuid": "6382ac34-99af-5c49-abaf-3cd6baa0eec4", 00:04:34.296 "assigned_rate_limits": { 00:04:34.296 "rw_ios_per_sec": 0, 00:04:34.296 "rw_mbytes_per_sec": 0, 00:04:34.296 "r_mbytes_per_sec": 0, 00:04:34.296 "w_mbytes_per_sec": 0 00:04:34.296 }, 00:04:34.296 "claimed": false, 00:04:34.296 "zoned": false, 00:04:34.296 "supported_io_types": { 00:04:34.296 "read": true, 00:04:34.296 "write": true, 00:04:34.296 "unmap": true, 00:04:34.296 "write_zeroes": true, 00:04:34.296 "flush": true, 00:04:34.296 "reset": true, 00:04:34.296 "compare": false, 00:04:34.296 "compare_and_write": false, 00:04:34.296 "abort": true, 00:04:34.296 "nvme_admin": false, 00:04:34.296 "nvme_io": false 00:04:34.296 }, 00:04:34.296 "memory_domains": [ 00:04:34.296 { 00:04:34.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.296 "dma_device_type": 2 00:04:34.296 } 00:04:34.296 ], 00:04:34.296 "driver_specific": { 00:04:34.296 "passthru": { 00:04:34.296 "name": "Passthru0", 00:04:34.296 "base_bdev_name": "Malloc2" 00:04:34.296 } 00:04:34.296 } 00:04:34.296 } 00:04:34.296 ]' 00:04:34.296 11:08:15 -- rpc/rpc.sh@21 -- # jq length 00:04:34.296 11:08:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.296 11:08:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.296 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.296 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.296 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.296 11:08:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.296 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.296 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.296 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.296 11:08:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.296 11:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.296 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.296 11:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.296 11:08:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.296 11:08:15 -- rpc/rpc.sh@26 -- # jq length 00:04:34.296 ************************************ 00:04:34.296 END TEST rpc_daemon_integrity 00:04:34.296 ************************************ 00:04:34.296 11:08:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.296 00:04:34.296 real 0m0.319s 00:04:34.296 user 0m0.212s 00:04:34.296 sys 0m0.040s 00:04:34.296 11:08:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.296 
11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.296 11:08:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.296 11:08:15 -- rpc/rpc.sh@84 -- # killprocess 53848 00:04:34.296 11:08:15 -- common/autotest_common.sh@926 -- # '[' -z 53848 ']' 00:04:34.296 11:08:15 -- common/autotest_common.sh@930 -- # kill -0 53848 00:04:34.296 11:08:15 -- common/autotest_common.sh@931 -- # uname 00:04:34.296 11:08:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:34.296 11:08:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53848 00:04:34.296 killing process with pid 53848 00:04:34.296 11:08:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:34.296 11:08:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:34.296 11:08:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53848' 00:04:34.296 11:08:15 -- common/autotest_common.sh@945 -- # kill 53848 00:04:34.296 11:08:15 -- common/autotest_common.sh@950 -- # wait 53848 00:04:34.555 00:04:34.555 real 0m2.637s 00:04:34.555 user 0m3.557s 00:04:34.555 sys 0m0.544s 00:04:34.555 11:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.555 ************************************ 00:04:34.555 END TEST rpc 00:04:34.555 ************************************ 00:04:34.555 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.814 11:08:16 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.814 11:08:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.814 11:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.814 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.814 ************************************ 00:04:34.814 START TEST rpc_client 00:04:34.814 ************************************ 00:04:34.814 11:08:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.814 * Looking for test storage... 
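For reference, the rpc_daemon_integrity pass that just completed reduces to a short bdev lifecycle driven over RPC. A minimal standalone sketch (the rpc shorthand and the use of the default socket are illustrative; the suite itself goes through its rpc_cmd wrapper):
    rpc=scripts/rpc.py                          # talks to the default /var/tmp/spdk.sock
    $rpc bdev_malloc_create 8 512               # returns the new bdev name, Malloc2 in this run
    $rpc bdev_passthru_create -b Malloc2 -p Passthru0
    $rpc bdev_get_bdevs | jq length             # 2 while both Malloc2 and Passthru0 exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc2
    $rpc bdev_get_bdevs | jq length             # back to 0 once everything is torn down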
00:04:34.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:34.814 11:08:16 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:34.814 OK 00:04:34.814 11:08:16 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:34.814 00:04:34.814 real 0m0.099s 00:04:34.814 user 0m0.045s 00:04:34.814 sys 0m0.059s 00:04:34.814 ************************************ 00:04:34.814 END TEST rpc_client 00:04:34.814 ************************************ 00:04:34.814 11:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.814 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.814 11:08:16 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.814 11:08:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.815 11:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.815 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.815 ************************************ 00:04:34.815 START TEST json_config 00:04:34.815 ************************************ 00:04:34.815 11:08:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.815 11:08:16 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.815 11:08:16 -- nvmf/common.sh@7 -- # uname -s 00:04:34.815 11:08:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.815 11:08:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.815 11:08:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.815 11:08:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.815 11:08:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.815 11:08:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.815 11:08:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.815 11:08:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.815 11:08:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.815 11:08:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.815 11:08:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:04:34.815 11:08:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:04:34.815 11:08:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.815 11:08:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.815 11:08:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.815 11:08:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.815 11:08:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.815 11:08:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.815 11:08:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.815 11:08:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.815 11:08:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.815 11:08:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.815 11:08:16 -- paths/export.sh@5 -- # export PATH 00:04:34.815 11:08:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.815 11:08:16 -- nvmf/common.sh@46 -- # : 0 00:04:34.815 11:08:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:34.815 11:08:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:34.815 11:08:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:34.815 11:08:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.815 11:08:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.815 11:08:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:34.815 11:08:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:34.815 11:08:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:34.815 11:08:16 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:34.815 11:08:16 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:34.815 11:08:16 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:34.815 11:08:16 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:34.815 11:08:16 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:34.815 11:08:16 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:34.815 11:08:16 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:34.815 11:08:16 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:34.815 11:08:16 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:34.815 11:08:16 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:34.815 INFO: JSON configuration test init 00:04:34.815 11:08:16 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 
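The NVMe host identity pulled in from nvmf/common.sh above is generated once and reused for every connect; a rough sketch of the derivation, assuming the hostid is simply the UUID tail of the hostnqn (the exact common.sh code may differ):
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: hostid is the uuid portion of the hostnqn
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")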
00:04:34.815 11:08:16 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:34.815 11:08:16 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:34.815 11:08:16 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:34.815 11:08:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.815 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.815 11:08:16 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:34.815 11:08:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.815 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.815 Waiting for target to run... 00:04:34.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.815 11:08:16 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:34.815 11:08:16 -- json_config/json_config.sh@98 -- # local app=target 00:04:34.815 11:08:16 -- json_config/json_config.sh@99 -- # shift 00:04:34.815 11:08:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:34.815 11:08:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:34.815 11:08:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=54079 00:04:34.815 11:08:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:34.815 11:08:16 -- json_config/json_config.sh@114 -- # waitforlisten 54079 /var/tmp/spdk_tgt.sock 00:04:34.815 11:08:16 -- common/autotest_common.sh@819 -- # '[' -z 54079 ']' 00:04:34.815 11:08:16 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:34.815 11:08:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.815 11:08:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:34.815 11:08:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.815 11:08:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:34.815 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:35.074 [2024-10-13 11:08:16.474949] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
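The launch recorded above follows the usual start-then-poll pattern; an approximate standalone equivalent of waitforlisten (the readiness probe and retry interval are illustrative, the spdk_tgt invocation is the one used in this run):
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid=$!
    # poll until the RPC socket answers, which is what waitforlisten waits for
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done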
00:04:35.074 [2024-10-13 11:08:16.475231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54079 ] 00:04:35.333 [2024-10-13 11:08:16.784400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.333 [2024-10-13 11:08:16.837804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.333 [2024-10-13 11:08:16.838231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.900 11:08:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:35.900 11:08:17 -- common/autotest_common.sh@852 -- # return 0 00:04:35.900 11:08:17 -- json_config/json_config.sh@115 -- # echo '' 00:04:35.900 00:04:35.900 11:08:17 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:35.900 11:08:17 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:35.900 11:08:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.900 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 11:08:17 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:35.900 11:08:17 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:35.900 11:08:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:35.900 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 11:08:17 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:36.159 11:08:17 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:36.159 11:08:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:36.418 11:08:17 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:36.418 11:08:17 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:36.418 11:08:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:36.418 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:36.418 11:08:17 -- json_config/json_config.sh@48 -- # local ret=0 00:04:36.418 11:08:17 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:36.418 11:08:17 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:36.418 11:08:17 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:36.418 11:08:17 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:36.418 11:08:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:36.677 11:08:18 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:36.677 11:08:18 -- json_config/json_config.sh@51 -- # local get_types 00:04:36.677 11:08:18 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:36.677 11:08:18 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:36.677 11:08:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:36.677 11:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:36.936 11:08:18 -- json_config/json_config.sh@58 -- # return 0 00:04:36.936 11:08:18 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:36.936 11:08:18 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
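Two calls traced above carry the whole setup: the device configuration emitted by gen_nvme.sh is fed into load_config (the two xtrace entries appear to be the halves of one pipeline), and notify_get_types simply lists the event types the target can report. Issued by hand:
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types   # ["bdev_register", "bdev_unregister"] in this run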
00:04:36.936 11:08:18 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:36.936 11:08:18 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:36.936 11:08:18 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:36.936 11:08:18 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:36.936 11:08:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:36.936 11:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:36.936 11:08:18 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:36.936 11:08:18 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:36.936 11:08:18 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:36.936 11:08:18 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.936 11:08:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.195 MallocForNvmf0 00:04:37.195 11:08:18 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.195 11:08:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.195 MallocForNvmf1 00:04:37.466 11:08:18 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.466 11:08:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.466 [2024-10-13 11:08:19.047023] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.738 11:08:19 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.738 11:08:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.996 11:08:19 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.996 11:08:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.996 11:08:19 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.996 11:08:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.255 11:08:19 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.255 11:08:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.514 [2024-10-13 11:08:19.987541] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.514 11:08:20 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:38.514 11:08:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:38.514 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:38.514 11:08:20 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:38.514 11:08:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:38.514 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:38.514 11:08:20 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:38.514 11:08:20 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.514 11:08:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.773 MallocBdevForConfigChangeCheck 00:04:38.773 11:08:20 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:38.773 11:08:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:38.773 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.032 11:08:20 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:39.032 11:08:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.291 INFO: shutting down applications... 00:04:39.291 11:08:20 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:39.291 11:08:20 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:39.291 11:08:20 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:39.291 11:08:20 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:39.291 11:08:20 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:39.550 Calling clear_iscsi_subsystem 00:04:39.550 Calling clear_nvmf_subsystem 00:04:39.550 Calling clear_nbd_subsystem 00:04:39.550 Calling clear_ublk_subsystem 00:04:39.550 Calling clear_vhost_blk_subsystem 00:04:39.550 Calling clear_vhost_scsi_subsystem 00:04:39.550 Calling clear_scheduler_subsystem 00:04:39.550 Calling clear_bdev_subsystem 00:04:39.550 Calling clear_accel_subsystem 00:04:39.550 Calling clear_vmd_subsystem 00:04:39.550 Calling clear_sock_subsystem 00:04:39.550 Calling clear_iobuf_subsystem 00:04:39.550 11:08:21 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:39.550 11:08:21 -- json_config/json_config.sh@396 -- # count=100 00:04:39.550 11:08:21 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:39.550 11:08:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.550 11:08:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:39.550 11:08:21 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:40.118 11:08:21 -- json_config/json_config.sh@398 -- # break 00:04:40.118 11:08:21 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:40.118 11:08:21 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:40.118 11:08:21 -- json_config/json_config.sh@120 -- # local app=target 00:04:40.118 11:08:21 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:40.118 11:08:21 -- json_config/json_config.sh@124 -- # [[ -n 54079 ]] 00:04:40.118 11:08:21 -- json_config/json_config.sh@127 -- # kill -SIGINT 54079 00:04:40.118 11:08:21 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
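Spelled out as plain rpc.py calls, the NVMe-oF/TCP configuration assembled above is the following sequence (socket path as in this run; the rpc variable is only shorthand):
    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420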
00:04:40.118 11:08:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:40.118 11:08:21 -- json_config/json_config.sh@130 -- # kill -0 54079 00:04:40.118 11:08:21 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:40.686 11:08:21 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:40.686 11:08:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:40.686 11:08:21 -- json_config/json_config.sh@130 -- # kill -0 54079 00:04:40.686 11:08:21 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:40.686 SPDK target shutdown done 00:04:40.686 INFO: relaunching applications... 00:04:40.686 11:08:21 -- json_config/json_config.sh@132 -- # break 00:04:40.686 11:08:21 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:40.686 11:08:21 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:40.686 11:08:21 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:40.686 11:08:21 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.686 11:08:21 -- json_config/json_config.sh@98 -- # local app=target 00:04:40.686 11:08:21 -- json_config/json_config.sh@99 -- # shift 00:04:40.686 Waiting for target to run... 00:04:40.686 11:08:21 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:40.686 11:08:21 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:40.686 11:08:21 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:40.686 11:08:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:40.686 11:08:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:40.686 11:08:21 -- json_config/json_config.sh@111 -- # app_pid[$app]=54270 00:04:40.686 11:08:21 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.686 11:08:21 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:40.686 11:08:21 -- json_config/json_config.sh@114 -- # waitforlisten 54270 /var/tmp/spdk_tgt.sock 00:04:40.686 11:08:21 -- common/autotest_common.sh@819 -- # '[' -z 54270 ']' 00:04:40.687 11:08:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.687 11:08:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:40.687 11:08:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.687 11:08:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:40.687 11:08:21 -- common/autotest_common.sh@10 -- # set +x 00:04:40.687 [2024-10-13 11:08:22.063710] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
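The relaunch above replays the configuration captured just before shutdown; reduced to its essentials the save/replay pair looks like this (the redirection into spdk_tgt_config.json is inferred, xtrace does not show it):
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # stop the first instance, then start again from the saved JSON instead of --wait-for-rpc
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &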
00:04:40.687 [2024-10-13 11:08:22.064015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54270 ] 00:04:40.945 [2024-10-13 11:08:22.378240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.945 [2024-10-13 11:08:22.422962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:40.945 [2024-10-13 11:08:22.423375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.205 [2024-10-13 11:08:22.723865] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.205 [2024-10-13 11:08:22.755953] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.463 11:08:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:41.463 00:04:41.463 INFO: Checking if target configuration is the same... 00:04:41.463 11:08:23 -- common/autotest_common.sh@852 -- # return 0 00:04:41.463 11:08:23 -- json_config/json_config.sh@115 -- # echo '' 00:04:41.463 11:08:23 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:41.463 11:08:23 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:41.463 11:08:23 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.463 11:08:23 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:41.463 11:08:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.464 + '[' 2 -ne 2 ']' 00:04:41.464 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:41.464 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:41.464 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:41.464 +++ basename /dev/fd/62 00:04:41.464 ++ mktemp /tmp/62.XXX 00:04:41.723 + tmp_file_1=/tmp/62.SA9 00:04:41.723 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.723 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:41.723 + tmp_file_2=/tmp/spdk_tgt_config.json.X2R 00:04:41.723 + ret=0 00:04:41.723 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.987 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.987 + diff -u /tmp/62.SA9 /tmp/spdk_tgt_config.json.X2R 00:04:41.987 INFO: JSON config files are the same 00:04:41.987 + echo 'INFO: JSON config files are the same' 00:04:41.987 + rm /tmp/62.SA9 /tmp/spdk_tgt_config.json.X2R 00:04:41.987 + exit 0 00:04:41.987 INFO: changing configuration and checking if this can be detected... 00:04:41.987 11:08:23 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:41.987 11:08:23 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
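The 'JSON config files are the same' verdict above comes from normalising both sides with config_filter.py and diffing the results; a by-hand equivalent, assuming config_filter.py filters stdin to stdout as json_diff.sh uses it (temporary file names are illustrative):
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    test/json_config/config_filter.py -method sort \
        < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'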
00:04:41.987 11:08:23 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:41.987 11:08:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.246 11:08:23 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.246 11:08:23 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:42.246 11:08:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.246 + '[' 2 -ne 2 ']' 00:04:42.246 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:42.246 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:42.246 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:42.246 +++ basename /dev/fd/62 00:04:42.246 ++ mktemp /tmp/62.XXX 00:04:42.246 + tmp_file_1=/tmp/62.3k7 00:04:42.246 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.246 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.246 + tmp_file_2=/tmp/spdk_tgt_config.json.n7e 00:04:42.246 + ret=0 00:04:42.246 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.818 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.818 + diff -u /tmp/62.3k7 /tmp/spdk_tgt_config.json.n7e 00:04:42.818 + ret=1 00:04:42.818 + echo '=== Start of file: /tmp/62.3k7 ===' 00:04:42.818 + cat /tmp/62.3k7 00:04:42.818 + echo '=== End of file: /tmp/62.3k7 ===' 00:04:42.818 + echo '' 00:04:42.818 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n7e ===' 00:04:42.818 + cat /tmp/spdk_tgt_config.json.n7e 00:04:42.818 + echo '=== End of file: /tmp/spdk_tgt_config.json.n7e ===' 00:04:42.818 + echo '' 00:04:42.818 + rm /tmp/62.3k7 /tmp/spdk_tgt_config.json.n7e 00:04:42.818 + exit 1 00:04:42.818 INFO: configuration change detected. 00:04:42.818 11:08:24 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
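Change detection is the same comparison run after removing the sentinel bdev; once MallocBdevForConfigChangeCheck is gone the sorted configs differ, diff exits non-zero, and that is the ret=1 seen above. A sketch, reusing the illustrative file names from the previous snippet:
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json || echo 'INFO: configuration change detected.'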
00:04:42.818 11:08:24 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:42.818 11:08:24 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:42.818 11:08:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:42.818 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:42.818 11:08:24 -- json_config/json_config.sh@360 -- # local ret=0 00:04:42.818 11:08:24 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:42.818 11:08:24 -- json_config/json_config.sh@370 -- # [[ -n 54270 ]] 00:04:42.818 11:08:24 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:42.818 11:08:24 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:42.818 11:08:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:42.818 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:42.818 11:08:24 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:42.818 11:08:24 -- json_config/json_config.sh@246 -- # uname -s 00:04:42.818 11:08:24 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:42.818 11:08:24 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:42.818 11:08:24 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:42.818 11:08:24 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:42.818 11:08:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:42.818 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:42.818 11:08:24 -- json_config/json_config.sh@376 -- # killprocess 54270 00:04:42.818 11:08:24 -- common/autotest_common.sh@926 -- # '[' -z 54270 ']' 00:04:42.818 11:08:24 -- common/autotest_common.sh@930 -- # kill -0 54270 00:04:42.818 11:08:24 -- common/autotest_common.sh@931 -- # uname 00:04:42.818 11:08:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:42.818 11:08:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54270 00:04:42.818 killing process with pid 54270 00:04:42.818 11:08:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:42.818 11:08:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:42.818 11:08:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54270' 00:04:42.818 11:08:24 -- common/autotest_common.sh@945 -- # kill 54270 00:04:42.818 11:08:24 -- common/autotest_common.sh@950 -- # wait 54270 00:04:43.078 11:08:24 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.078 11:08:24 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:43.078 11:08:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:43.078 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.078 INFO: Success 00:04:43.078 11:08:24 -- json_config/json_config.sh@381 -- # return 0 00:04:43.078 11:08:24 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:43.078 00:04:43.078 real 0m8.332s 00:04:43.078 user 0m12.198s 00:04:43.078 sys 0m1.452s 00:04:43.078 11:08:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.078 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.078 ************************************ 00:04:43.078 END TEST json_config 00:04:43.078 ************************************ 00:04:43.337 11:08:24 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.337 
11:08:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.337 11:08:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.337 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.337 ************************************ 00:04:43.337 START TEST json_config_extra_key 00:04:43.337 ************************************ 00:04:43.337 11:08:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.337 11:08:24 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.337 11:08:24 -- nvmf/common.sh@7 -- # uname -s 00:04:43.337 11:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.337 11:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.337 11:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.337 11:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.337 11:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.337 11:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.337 11:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.337 11:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.337 11:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.337 11:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.337 11:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:04:43.337 11:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:04:43.337 11:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.337 11:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.337 11:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.337 11:08:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.337 11:08:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.337 11:08:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.337 11:08:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.337 11:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.337 11:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.337 11:08:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:43.337 11:08:24 -- paths/export.sh@5 -- # export PATH 00:04:43.337 11:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.337 11:08:24 -- nvmf/common.sh@46 -- # : 0 00:04:43.337 11:08:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:43.337 11:08:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:43.337 11:08:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:43.337 11:08:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.337 11:08:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.337 11:08:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:43.337 11:08:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:43.337 11:08:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:43.337 11:08:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:43.337 11:08:24 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:43.338 INFO: launching applications... 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54415 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.338 Waiting for target to run... 00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:04:43.338 11:08:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54415 /var/tmp/spdk_tgt.sock 00:04:43.338 11:08:24 -- common/autotest_common.sh@819 -- # '[' -z 54415 ']' 00:04:43.338 11:08:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.338 11:08:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:43.338 11:08:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.338 11:08:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:43.338 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.338 [2024-10-13 11:08:24.847774] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:43.338 [2024-10-13 11:08:24.847870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54415 ] 00:04:43.597 [2024-10-13 11:08:25.142304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.597 [2024-10-13 11:08:25.186137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.597 [2024-10-13 11:08:25.186603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.576 00:04:44.576 INFO: shutting down applications... 00:04:44.576 11:08:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.576 11:08:25 -- common/autotest_common.sh@852 -- # return 0 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
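The shutdown that follows is cooperative rather than forceful: SIGINT first, then the pid is polled for up to 30 half-second intervals, exactly as the loop counters in the trace suggest. Roughly (app_pid stands in for the suite's app_pid[$app]):
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'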
00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54415 ]] 00:04:44.576 11:08:25 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54415 00:04:44.577 11:08:25 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:44.577 11:08:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:44.577 11:08:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54415 00:04:44.577 11:08:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54415 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:44.835 11:08:26 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:44.836 SPDK target shutdown done 00:04:44.836 11:08:26 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:44.836 Success 00:04:45.095 00:04:45.095 real 0m1.728s 00:04:45.095 user 0m1.716s 00:04:45.095 sys 0m0.295s 00:04:45.095 11:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.095 ************************************ 00:04:45.095 END TEST json_config_extra_key 00:04:45.095 ************************************ 00:04:45.095 11:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:45.095 11:08:26 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.095 11:08:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.095 11:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.095 11:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:45.095 ************************************ 00:04:45.095 START TEST alias_rpc 00:04:45.095 ************************************ 00:04:45.095 11:08:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.095 * Looking for test storage... 00:04:45.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:45.095 11:08:26 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.095 11:08:26 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54473 00:04:45.095 11:08:26 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54473 00:04:45.095 11:08:26 -- common/autotest_common.sh@819 -- # '[' -z 54473 ']' 00:04:45.095 11:08:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.095 11:08:26 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.095 11:08:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.095 11:08:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
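Unlike the json_config runs, the alias_rpc target above is started without -r, so both the target and rpc.py fall back to the default UNIX socket named in the wait message:
    build/bin/spdk_tgt &                        # listens on /var/tmp/spdk.sock by default
    scripts/rpc.py rpc_get_methods >/dev/null   # no -s needed, same default socket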
00:04:45.095 11:08:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.095 11:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:45.095 [2024-10-13 11:08:26.636363] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:45.095 [2024-10-13 11:08:26.636473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54473 ] 00:04:45.353 [2024-10-13 11:08:26.776082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.353 [2024-10-13 11:08:26.846930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.354 [2024-10-13 11:08:26.847143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.289 11:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:46.289 11:08:27 -- common/autotest_common.sh@852 -- # return 0 00:04:46.289 11:08:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.548 11:08:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54473 00:04:46.548 11:08:27 -- common/autotest_common.sh@926 -- # '[' -z 54473 ']' 00:04:46.548 11:08:27 -- common/autotest_common.sh@930 -- # kill -0 54473 00:04:46.548 11:08:27 -- common/autotest_common.sh@931 -- # uname 00:04:46.548 11:08:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.548 11:08:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54473 00:04:46.548 killing process with pid 54473 00:04:46.548 11:08:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.548 11:08:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.548 11:08:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54473' 00:04:46.548 11:08:27 -- common/autotest_common.sh@945 -- # kill 54473 00:04:46.548 11:08:27 -- common/autotest_common.sh@950 -- # wait 54473 00:04:46.807 ************************************ 00:04:46.807 END TEST alias_rpc 00:04:46.807 ************************************ 00:04:46.807 00:04:46.807 real 0m1.750s 00:04:46.807 user 0m2.143s 00:04:46.807 sys 0m0.330s 00:04:46.807 11:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.807 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.807 11:08:28 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:46.807 11:08:28 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.807 11:08:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.807 11:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.807 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.807 ************************************ 00:04:46.807 START TEST spdkcli_tcp 00:04:46.807 ************************************ 00:04:46.807 11:08:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.807 * Looking for test storage... 
00:04:46.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:46.807 11:08:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:46.807 11:08:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:46.807 11:08:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.807 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54548 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@27 -- # waitforlisten 54548 00:04:46.807 11:08:28 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:46.807 11:08:28 -- common/autotest_common.sh@819 -- # '[' -z 54548 ']' 00:04:46.807 11:08:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.807 11:08:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.807 11:08:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.807 11:08:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.807 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:47.066 [2024-10-13 11:08:28.439670] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
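The TCP endpoint announced above (127.0.0.1:9998) is not opened by the target itself; as the next entries show, the test bridges the default UNIX socket to TCP with socat and then points rpc.py at the TCP address, with -r and -t bounding connection retries and per-call timeout:
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods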
00:04:47.067 [2024-10-13 11:08:28.439851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54548 ] 00:04:47.067 [2024-10-13 11:08:28.580745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.067 [2024-10-13 11:08:28.639602] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.067 [2024-10-13 11:08:28.640151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.067 [2024-10-13 11:08:28.640162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.003 11:08:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.003 11:08:29 -- common/autotest_common.sh@852 -- # return 0 00:04:48.003 11:08:29 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.003 11:08:29 -- spdkcli/tcp.sh@31 -- # socat_pid=54565 00:04:48.003 11:08:29 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.263 [ 00:04:48.263 "bdev_malloc_delete", 00:04:48.263 "bdev_malloc_create", 00:04:48.263 "bdev_null_resize", 00:04:48.263 "bdev_null_delete", 00:04:48.263 "bdev_null_create", 00:04:48.263 "bdev_nvme_cuse_unregister", 00:04:48.263 "bdev_nvme_cuse_register", 00:04:48.263 "bdev_opal_new_user", 00:04:48.263 "bdev_opal_set_lock_state", 00:04:48.263 "bdev_opal_delete", 00:04:48.263 "bdev_opal_get_info", 00:04:48.263 "bdev_opal_create", 00:04:48.263 "bdev_nvme_opal_revert", 00:04:48.263 "bdev_nvme_opal_init", 00:04:48.263 "bdev_nvme_send_cmd", 00:04:48.263 "bdev_nvme_get_path_iostat", 00:04:48.263 "bdev_nvme_get_mdns_discovery_info", 00:04:48.263 "bdev_nvme_stop_mdns_discovery", 00:04:48.263 "bdev_nvme_start_mdns_discovery", 00:04:48.263 "bdev_nvme_set_multipath_policy", 00:04:48.263 "bdev_nvme_set_preferred_path", 00:04:48.263 "bdev_nvme_get_io_paths", 00:04:48.263 "bdev_nvme_remove_error_injection", 00:04:48.263 "bdev_nvme_add_error_injection", 00:04:48.263 "bdev_nvme_get_discovery_info", 00:04:48.263 "bdev_nvme_stop_discovery", 00:04:48.263 "bdev_nvme_start_discovery", 00:04:48.263 "bdev_nvme_get_controller_health_info", 00:04:48.263 "bdev_nvme_disable_controller", 00:04:48.263 "bdev_nvme_enable_controller", 00:04:48.263 "bdev_nvme_reset_controller", 00:04:48.263 "bdev_nvme_get_transport_statistics", 00:04:48.263 "bdev_nvme_apply_firmware", 00:04:48.263 "bdev_nvme_detach_controller", 00:04:48.263 "bdev_nvme_get_controllers", 00:04:48.263 "bdev_nvme_attach_controller", 00:04:48.263 "bdev_nvme_set_hotplug", 00:04:48.263 "bdev_nvme_set_options", 00:04:48.263 "bdev_passthru_delete", 00:04:48.263 "bdev_passthru_create", 00:04:48.263 "bdev_lvol_grow_lvstore", 00:04:48.263 "bdev_lvol_get_lvols", 00:04:48.263 "bdev_lvol_get_lvstores", 00:04:48.263 "bdev_lvol_delete", 00:04:48.263 "bdev_lvol_set_read_only", 00:04:48.263 "bdev_lvol_resize", 00:04:48.263 "bdev_lvol_decouple_parent", 00:04:48.263 "bdev_lvol_inflate", 00:04:48.263 "bdev_lvol_rename", 00:04:48.263 "bdev_lvol_clone_bdev", 00:04:48.263 "bdev_lvol_clone", 00:04:48.263 "bdev_lvol_snapshot", 00:04:48.263 "bdev_lvol_create", 00:04:48.263 "bdev_lvol_delete_lvstore", 00:04:48.263 "bdev_lvol_rename_lvstore", 00:04:48.263 "bdev_lvol_create_lvstore", 00:04:48.263 "bdev_raid_set_options", 00:04:48.263 "bdev_raid_remove_base_bdev", 00:04:48.263 "bdev_raid_add_base_bdev", 
00:04:48.263 "bdev_raid_delete", 00:04:48.263 "bdev_raid_create", 00:04:48.263 "bdev_raid_get_bdevs", 00:04:48.263 "bdev_error_inject_error", 00:04:48.263 "bdev_error_delete", 00:04:48.263 "bdev_error_create", 00:04:48.263 "bdev_split_delete", 00:04:48.263 "bdev_split_create", 00:04:48.263 "bdev_delay_delete", 00:04:48.263 "bdev_delay_create", 00:04:48.263 "bdev_delay_update_latency", 00:04:48.263 "bdev_zone_block_delete", 00:04:48.263 "bdev_zone_block_create", 00:04:48.263 "blobfs_create", 00:04:48.263 "blobfs_detect", 00:04:48.263 "blobfs_set_cache_size", 00:04:48.263 "bdev_aio_delete", 00:04:48.263 "bdev_aio_rescan", 00:04:48.263 "bdev_aio_create", 00:04:48.263 "bdev_ftl_set_property", 00:04:48.263 "bdev_ftl_get_properties", 00:04:48.263 "bdev_ftl_get_stats", 00:04:48.263 "bdev_ftl_unmap", 00:04:48.263 "bdev_ftl_unload", 00:04:48.263 "bdev_ftl_delete", 00:04:48.263 "bdev_ftl_load", 00:04:48.263 "bdev_ftl_create", 00:04:48.263 "bdev_virtio_attach_controller", 00:04:48.263 "bdev_virtio_scsi_get_devices", 00:04:48.263 "bdev_virtio_detach_controller", 00:04:48.263 "bdev_virtio_blk_set_hotplug", 00:04:48.263 "bdev_iscsi_delete", 00:04:48.263 "bdev_iscsi_create", 00:04:48.263 "bdev_iscsi_set_options", 00:04:48.263 "bdev_uring_delete", 00:04:48.264 "bdev_uring_create", 00:04:48.264 "accel_error_inject_error", 00:04:48.264 "ioat_scan_accel_module", 00:04:48.264 "dsa_scan_accel_module", 00:04:48.264 "iaa_scan_accel_module", 00:04:48.264 "vfu_virtio_create_scsi_endpoint", 00:04:48.264 "vfu_virtio_scsi_remove_target", 00:04:48.264 "vfu_virtio_scsi_add_target", 00:04:48.264 "vfu_virtio_create_blk_endpoint", 00:04:48.264 "vfu_virtio_delete_endpoint", 00:04:48.264 "iscsi_set_options", 00:04:48.264 "iscsi_get_auth_groups", 00:04:48.264 "iscsi_auth_group_remove_secret", 00:04:48.264 "iscsi_auth_group_add_secret", 00:04:48.264 "iscsi_delete_auth_group", 00:04:48.264 "iscsi_create_auth_group", 00:04:48.264 "iscsi_set_discovery_auth", 00:04:48.264 "iscsi_get_options", 00:04:48.264 "iscsi_target_node_request_logout", 00:04:48.264 "iscsi_target_node_set_redirect", 00:04:48.264 "iscsi_target_node_set_auth", 00:04:48.264 "iscsi_target_node_add_lun", 00:04:48.264 "iscsi_get_connections", 00:04:48.264 "iscsi_portal_group_set_auth", 00:04:48.264 "iscsi_start_portal_group", 00:04:48.264 "iscsi_delete_portal_group", 00:04:48.264 "iscsi_create_portal_group", 00:04:48.264 "iscsi_get_portal_groups", 00:04:48.264 "iscsi_delete_target_node", 00:04:48.264 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.264 "iscsi_target_node_add_pg_ig_maps", 00:04:48.264 "iscsi_create_target_node", 00:04:48.264 "iscsi_get_target_nodes", 00:04:48.264 "iscsi_delete_initiator_group", 00:04:48.264 "iscsi_initiator_group_remove_initiators", 00:04:48.264 "iscsi_initiator_group_add_initiators", 00:04:48.264 "iscsi_create_initiator_group", 00:04:48.264 "iscsi_get_initiator_groups", 00:04:48.264 "nvmf_set_crdt", 00:04:48.264 "nvmf_set_config", 00:04:48.264 "nvmf_set_max_subsystems", 00:04:48.264 "nvmf_subsystem_get_listeners", 00:04:48.264 "nvmf_subsystem_get_qpairs", 00:04:48.264 "nvmf_subsystem_get_controllers", 00:04:48.264 "nvmf_get_stats", 00:04:48.264 "nvmf_get_transports", 00:04:48.264 "nvmf_create_transport", 00:04:48.264 "nvmf_get_targets", 00:04:48.264 "nvmf_delete_target", 00:04:48.264 "nvmf_create_target", 00:04:48.264 "nvmf_subsystem_allow_any_host", 00:04:48.264 "nvmf_subsystem_remove_host", 00:04:48.264 "nvmf_subsystem_add_host", 00:04:48.264 "nvmf_subsystem_remove_ns", 00:04:48.264 "nvmf_subsystem_add_ns", 00:04:48.264 
"nvmf_subsystem_listener_set_ana_state", 00:04:48.264 "nvmf_discovery_get_referrals", 00:04:48.264 "nvmf_discovery_remove_referral", 00:04:48.264 "nvmf_discovery_add_referral", 00:04:48.264 "nvmf_subsystem_remove_listener", 00:04:48.264 "nvmf_subsystem_add_listener", 00:04:48.264 "nvmf_delete_subsystem", 00:04:48.264 "nvmf_create_subsystem", 00:04:48.264 "nvmf_get_subsystems", 00:04:48.264 "env_dpdk_get_mem_stats", 00:04:48.264 "nbd_get_disks", 00:04:48.264 "nbd_stop_disk", 00:04:48.264 "nbd_start_disk", 00:04:48.264 "ublk_recover_disk", 00:04:48.264 "ublk_get_disks", 00:04:48.264 "ublk_stop_disk", 00:04:48.264 "ublk_start_disk", 00:04:48.264 "ublk_destroy_target", 00:04:48.264 "ublk_create_target", 00:04:48.264 "virtio_blk_create_transport", 00:04:48.264 "virtio_blk_get_transports", 00:04:48.264 "vhost_controller_set_coalescing", 00:04:48.264 "vhost_get_controllers", 00:04:48.264 "vhost_delete_controller", 00:04:48.264 "vhost_create_blk_controller", 00:04:48.264 "vhost_scsi_controller_remove_target", 00:04:48.264 "vhost_scsi_controller_add_target", 00:04:48.264 "vhost_start_scsi_controller", 00:04:48.264 "vhost_create_scsi_controller", 00:04:48.264 "thread_set_cpumask", 00:04:48.264 "framework_get_scheduler", 00:04:48.264 "framework_set_scheduler", 00:04:48.264 "framework_get_reactors", 00:04:48.264 "thread_get_io_channels", 00:04:48.264 "thread_get_pollers", 00:04:48.264 "thread_get_stats", 00:04:48.264 "framework_monitor_context_switch", 00:04:48.264 "spdk_kill_instance", 00:04:48.264 "log_enable_timestamps", 00:04:48.264 "log_get_flags", 00:04:48.264 "log_clear_flag", 00:04:48.264 "log_set_flag", 00:04:48.264 "log_get_level", 00:04:48.264 "log_set_level", 00:04:48.264 "log_get_print_level", 00:04:48.264 "log_set_print_level", 00:04:48.264 "framework_enable_cpumask_locks", 00:04:48.264 "framework_disable_cpumask_locks", 00:04:48.264 "framework_wait_init", 00:04:48.264 "framework_start_init", 00:04:48.264 "scsi_get_devices", 00:04:48.264 "bdev_get_histogram", 00:04:48.264 "bdev_enable_histogram", 00:04:48.264 "bdev_set_qos_limit", 00:04:48.264 "bdev_set_qd_sampling_period", 00:04:48.264 "bdev_get_bdevs", 00:04:48.264 "bdev_reset_iostat", 00:04:48.264 "bdev_get_iostat", 00:04:48.264 "bdev_examine", 00:04:48.264 "bdev_wait_for_examine", 00:04:48.264 "bdev_set_options", 00:04:48.264 "notify_get_notifications", 00:04:48.264 "notify_get_types", 00:04:48.264 "accel_get_stats", 00:04:48.264 "accel_set_options", 00:04:48.264 "accel_set_driver", 00:04:48.264 "accel_crypto_key_destroy", 00:04:48.264 "accel_crypto_keys_get", 00:04:48.264 "accel_crypto_key_create", 00:04:48.264 "accel_assign_opc", 00:04:48.264 "accel_get_module_info", 00:04:48.264 "accel_get_opc_assignments", 00:04:48.264 "vmd_rescan", 00:04:48.264 "vmd_remove_device", 00:04:48.264 "vmd_enable", 00:04:48.264 "sock_set_default_impl", 00:04:48.264 "sock_impl_set_options", 00:04:48.264 "sock_impl_get_options", 00:04:48.264 "iobuf_get_stats", 00:04:48.264 "iobuf_set_options", 00:04:48.264 "framework_get_pci_devices", 00:04:48.264 "framework_get_config", 00:04:48.264 "framework_get_subsystems", 00:04:48.264 "vfu_tgt_set_base_path", 00:04:48.264 "trace_get_info", 00:04:48.264 "trace_get_tpoint_group_mask", 00:04:48.264 "trace_disable_tpoint_group", 00:04:48.264 "trace_enable_tpoint_group", 00:04:48.264 "trace_clear_tpoint_mask", 00:04:48.264 "trace_set_tpoint_mask", 00:04:48.264 "spdk_get_version", 00:04:48.264 "rpc_get_methods" 00:04:48.264 ] 00:04:48.264 11:08:29 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.264 
11:08:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.264 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:48.264 11:08:29 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.264 11:08:29 -- spdkcli/tcp.sh@38 -- # killprocess 54548 00:04:48.264 11:08:29 -- common/autotest_common.sh@926 -- # '[' -z 54548 ']' 00:04:48.264 11:08:29 -- common/autotest_common.sh@930 -- # kill -0 54548 00:04:48.264 11:08:29 -- common/autotest_common.sh@931 -- # uname 00:04:48.264 11:08:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:48.264 11:08:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54548 00:04:48.264 killing process with pid 54548 00:04:48.264 11:08:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:48.264 11:08:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:48.264 11:08:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54548' 00:04:48.264 11:08:29 -- common/autotest_common.sh@945 -- # kill 54548 00:04:48.264 11:08:29 -- common/autotest_common.sh@950 -- # wait 54548 00:04:48.523 ************************************ 00:04:48.523 END TEST spdkcli_tcp 00:04:48.523 ************************************ 00:04:48.523 00:04:48.523 real 0m1.764s 00:04:48.523 user 0m3.405s 00:04:48.523 sys 0m0.374s 00:04:48.523 11:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.523 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:48.523 11:08:30 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.523 11:08:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.523 11:08:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.523 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:48.523 ************************************ 00:04:48.523 START TEST dpdk_mem_utility 00:04:48.523 ************************************ 00:04:48.523 11:08:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.798 * Looking for test storage... 00:04:48.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:48.798 11:08:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:48.798 11:08:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54638 00:04:48.798 11:08:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54638 00:04:48.798 11:08:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.798 11:08:30 -- common/autotest_common.sh@819 -- # '[' -z 54638 ']' 00:04:48.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.798 11:08:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.798 11:08:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.798 11:08:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.798 11:08:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.798 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:48.798 [2024-10-13 11:08:30.227964] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:48.798 [2024-10-13 11:08:30.228075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54638 ] 00:04:48.798 [2024-10-13 11:08:30.360261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.058 [2024-10-13 11:08:30.418040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.058 [2024-10-13 11:08:30.418220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.625 11:08:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.625 11:08:31 -- common/autotest_common.sh@852 -- # return 0 00:04:49.625 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.625 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.625 11:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.625 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:49.625 { 00:04:49.625 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.625 } 00:04:49.625 11:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.625 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.886 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.886 1 heaps totaling size 814.000000 MiB 00:04:49.886 size: 814.000000 MiB heap id: 0 00:04:49.886 end heaps---------- 00:04:49.886 8 mempools totaling size 598.116089 MiB 00:04:49.886 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.886 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.886 size: 84.521057 MiB name: bdev_io_54638 00:04:49.886 size: 51.011292 MiB name: evtpool_54638 00:04:49.886 size: 50.003479 MiB name: msgpool_54638 00:04:49.886 size: 21.763794 MiB name: PDU_Pool 00:04:49.886 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.886 size: 0.026123 MiB name: Session_Pool 00:04:49.886 end mempools------- 00:04:49.886 6 memzones totaling size 4.142822 MiB 00:04:49.886 size: 1.000366 MiB name: RG_ring_0_54638 00:04:49.886 size: 1.000366 MiB name: RG_ring_1_54638 00:04:49.886 size: 1.000366 MiB name: RG_ring_4_54638 00:04:49.886 size: 1.000366 MiB name: RG_ring_5_54638 00:04:49.886 size: 0.125366 MiB name: RG_ring_2_54638 00:04:49.886 size: 0.015991 MiB name: RG_ring_3_54638 00:04:49.886 end memzones------- 00:04:49.886 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.886 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:04:49.886 list of free elements. 
size: 12.471375 MiB 00:04:49.886 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:49.886 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:49.886 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:49.886 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:49.886 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:49.886 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:49.886 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:49.886 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:49.886 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:49.886 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:04:49.886 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:49.886 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:49.886 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:49.886 element at address: 0x200027e00000 with size: 0.396484 MiB 00:04:49.886 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:49.886 list of standard malloc elements. size: 199.266052 MiB 00:04:49.886 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:49.886 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:49.886 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.886 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:49.886 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.886 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.886 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:49.886 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.886 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:49.886 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:49.886 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:49.886 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:49.887 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93400 with size: 0.000183 MiB 
00:04:49.887 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:49.887 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:49.888 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e65800 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e658c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6c4c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:49.888 element at 
address: 0x200027e6c900 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6edc0 
with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:49.888 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:49.888 list of memzone associated elements. size: 602.262573 MiB 00:04:49.888 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:49.888 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.888 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:49.888 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.888 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:49.888 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54638_0 00:04:49.888 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:49.888 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54638_0 00:04:49.888 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:49.888 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54638_0 00:04:49.888 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:49.888 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.888 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:49.888 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.888 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:49.888 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54638 00:04:49.888 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:49.888 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54638 00:04:49.888 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.888 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54638 00:04:49.888 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:49.888 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.888 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:49.888 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.888 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:49.888 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.888 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:49.888 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.888 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:49.888 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54638 00:04:49.888 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:49.888 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54638 00:04:49.888 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:49.888 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54638 00:04:49.888 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:49.888 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54638 00:04:49.888 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:49.888 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54638 00:04:49.888 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:49.889 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.889 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:49.889 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.889 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:49.889 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.889 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:49.889 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54638 00:04:49.889 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:49.889 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.889 element at address: 0x200027e65980 with size: 0.023743 MiB 00:04:49.889 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.889 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:49.889 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54638 00:04:49.889 element at address: 0x200027e6bac0 with size: 0.002441 MiB 00:04:49.889 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.889 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:49.889 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54638 00:04:49.889 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:49.889 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54638 00:04:49.889 element at address: 0x200027e6c580 with size: 0.000305 MiB 00:04:49.889 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.889 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.889 11:08:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54638 00:04:49.889 11:08:31 -- common/autotest_common.sh@926 -- # '[' -z 54638 ']' 00:04:49.889 11:08:31 -- common/autotest_common.sh@930 -- # kill -0 54638 00:04:49.889 11:08:31 -- common/autotest_common.sh@931 -- # uname 00:04:49.889 11:08:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.889 11:08:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54638 00:04:49.889 11:08:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.889 killing process with pid 54638 
00:04:49.889 11:08:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.889 11:08:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54638' 00:04:49.889 11:08:31 -- common/autotest_common.sh@945 -- # kill 54638 00:04:49.889 11:08:31 -- common/autotest_common.sh@950 -- # wait 54638 00:04:50.148 00:04:50.148 real 0m1.488s 00:04:50.148 user 0m1.727s 00:04:50.148 sys 0m0.286s 00:04:50.148 ************************************ 00:04:50.148 END TEST dpdk_mem_utility 00:04:50.148 ************************************ 00:04:50.148 11:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.148 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:50.148 11:08:31 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.148 11:08:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.148 11:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.148 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:50.148 ************************************ 00:04:50.148 START TEST event 00:04:50.148 ************************************ 00:04:50.148 11:08:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.148 * Looking for test storage... 00:04:50.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:50.148 11:08:31 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:50.148 11:08:31 -- bdev/nbd_common.sh@6 -- # set -e 00:04:50.148 11:08:31 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.148 11:08:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:50.148 11:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.148 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:50.148 ************************************ 00:04:50.148 START TEST event_perf 00:04:50.148 ************************************ 00:04:50.148 11:08:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.148 Running I/O for 1 seconds...[2024-10-13 11:08:31.743921] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:50.148 [2024-10-13 11:08:31.744012] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54703 ] 00:04:50.407 [2024-10-13 11:08:31.882608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.407 [2024-10-13 11:08:31.955925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.407 [2024-10-13 11:08:31.956064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.407 Running I/O for 1 seconds...[2024-10-13 11:08:31.956900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.407 [2024-10-13 11:08:31.956980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.827 00:04:51.827 lcore 0: 196808 00:04:51.827 lcore 1: 196810 00:04:51.827 lcore 2: 196813 00:04:51.827 lcore 3: 196815 00:04:51.827 done. 
00:04:51.827 00:04:51.827 real 0m1.329s 00:04:51.827 user 0m4.150s 00:04:51.827 sys 0m0.052s 00:04:51.827 11:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.827 ************************************ 00:04:51.827 END TEST event_perf 00:04:51.827 ************************************ 00:04:51.827 11:08:33 -- common/autotest_common.sh@10 -- # set +x 00:04:51.827 11:08:33 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.827 11:08:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:51.827 11:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.827 11:08:33 -- common/autotest_common.sh@10 -- # set +x 00:04:51.827 ************************************ 00:04:51.827 START TEST event_reactor 00:04:51.827 ************************************ 00:04:51.827 11:08:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.827 [2024-10-13 11:08:33.130518] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:51.827 [2024-10-13 11:08:33.130623] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54747 ] 00:04:51.827 [2024-10-13 11:08:33.262077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.827 [2024-10-13 11:08:33.310851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.204 test_start 00:04:53.204 oneshot 00:04:53.204 tick 100 00:04:53.204 tick 100 00:04:53.204 tick 250 00:04:53.204 tick 100 00:04:53.204 tick 100 00:04:53.204 tick 250 00:04:53.204 tick 500 00:04:53.204 tick 100 00:04:53.204 tick 100 00:04:53.204 tick 100 00:04:53.204 tick 250 00:04:53.204 tick 100 00:04:53.204 tick 100 00:04:53.204 test_end 00:04:53.204 00:04:53.204 real 0m1.293s 00:04:53.204 user 0m1.140s 00:04:53.204 sys 0m0.044s 00:04:53.204 11:08:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.204 ************************************ 00:04:53.204 END TEST event_reactor 00:04:53.204 ************************************ 00:04:53.204 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.204 11:08:34 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.204 11:08:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:53.204 11:08:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.204 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.204 ************************************ 00:04:53.204 START TEST event_reactor_perf 00:04:53.204 ************************************ 00:04:53.204 11:08:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.204 [2024-10-13 11:08:34.472157] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:53.204 [2024-10-13 11:08:34.472274] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54777 ] 00:04:53.204 [2024-10-13 11:08:34.601548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.204 [2024-10-13 11:08:34.651657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.582 test_start 00:04:54.582 test_end 00:04:54.582 Performance: 416069 events per second 00:04:54.582 00:04:54.582 real 0m1.286s 00:04:54.582 user 0m1.141s 00:04:54.582 sys 0m0.035s 00:04:54.582 11:08:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.582 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 END TEST event_reactor_perf 00:04:54.582 ************************************ 00:04:54.582 11:08:35 -- event/event.sh@49 -- # uname -s 00:04:54.582 11:08:35 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.582 11:08:35 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.582 11:08:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.582 11:08:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.582 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 START TEST event_scheduler 00:04:54.582 ************************************ 00:04:54.582 11:08:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.582 * Looking for test storage... 00:04:54.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:54.582 11:08:35 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.582 11:08:35 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54838 00:04:54.582 11:08:35 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.582 11:08:35 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.582 11:08:35 -- scheduler/scheduler.sh@37 -- # waitforlisten 54838 00:04:54.582 11:08:35 -- common/autotest_common.sh@819 -- # '[' -z 54838 ']' 00:04:54.582 11:08:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.582 11:08:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.582 11:08:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.582 11:08:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.582 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 [2024-10-13 11:08:35.929534] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:54.582 [2024-10-13 11:08:35.929658] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54838 ] 00:04:54.582 [2024-10-13 11:08:36.071295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.582 [2024-10-13 11:08:36.142592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.582 [2024-10-13 11:08:36.142738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.582 [2024-10-13 11:08:36.143819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.582 [2024-10-13 11:08:36.143870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.518 11:08:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.518 11:08:36 -- common/autotest_common.sh@852 -- # return 0 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 POWER: Env isn't set yet! 00:04:55.518 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:55.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.518 POWER: Attempting to initialise PSTAT power management... 00:04:55.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.518 POWER: Cannot set governor of lcore 0 to performance 00:04:55.518 POWER: Attempting to initialise AMD PSTATE power management... 00:04:55.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.518 POWER: Attempting to initialise CPPC power management... 00:04:55.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.518 POWER: Attempting to initialise VM power management... 
00:04:55.518 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:55.518 POWER: Unable to set Power Management Environment for lcore 0 00:04:55.518 [2024-10-13 11:08:36.820901] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:55.518 [2024-10-13 11:08:36.820914] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:55.518 [2024-10-13 11:08:36.820921] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:55.518 [2024-10-13 11:08:36.820933] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:55.518 [2024-10-13 11:08:36.820940] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:55.518 [2024-10-13 11:08:36.820947] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 [2024-10-13 11:08:36.877834] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.518 11:08:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.518 11:08:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 ************************************ 00:04:55.518 START TEST scheduler_create_thread 00:04:55.518 ************************************ 00:04:55.518 11:08:36 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 2 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 3 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 4 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 5 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 6 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 7 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 8 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 9 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 10 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:55.518 11:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.518 11:08:36 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.518 11:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.518 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.894 11:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.895 11:08:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.895 11:08:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.895 11:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.895 11:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.273 11:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:58.273 00:04:58.273 real 0m2.614s 00:04:58.273 user 0m0.016s 00:04:58.273 sys 0m0.006s 00:04:58.273 11:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.273 ************************************ 00:04:58.273 END TEST scheduler_create_thread 
00:04:58.273 ************************************ 00:04:58.273 11:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:58.273 11:08:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:58.273 11:08:39 -- scheduler/scheduler.sh@46 -- # killprocess 54838 00:04:58.273 11:08:39 -- common/autotest_common.sh@926 -- # '[' -z 54838 ']' 00:04:58.273 11:08:39 -- common/autotest_common.sh@930 -- # kill -0 54838 00:04:58.273 11:08:39 -- common/autotest_common.sh@931 -- # uname 00:04:58.273 11:08:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.273 11:08:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54838 00:04:58.273 11:08:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:58.273 killing process with pid 54838 00:04:58.273 11:08:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:58.273 11:08:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54838' 00:04:58.273 11:08:39 -- common/autotest_common.sh@945 -- # kill 54838 00:04:58.273 11:08:39 -- common/autotest_common.sh@950 -- # wait 54838 00:04:58.532 [2024-10-13 11:08:39.984845] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:58.792 00:04:58.792 real 0m4.387s 00:04:58.792 user 0m8.221s 00:04:58.792 sys 0m0.289s 00:04:58.792 11:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.792 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 ************************************ 00:04:58.792 END TEST event_scheduler 00:04:58.792 ************************************ 00:04:58.792 11:08:40 -- event/event.sh@51 -- # modprobe -n nbd 00:04:58.792 11:08:40 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:58.792 11:08:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.792 11:08:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.792 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 ************************************ 00:04:58.792 START TEST app_repeat 00:04:58.792 ************************************ 00:04:58.792 11:08:40 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:58.792 11:08:40 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.792 11:08:40 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.792 11:08:40 -- event/event.sh@13 -- # local nbd_list 00:04:58.792 11:08:40 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.792 11:08:40 -- event/event.sh@14 -- # local bdev_list 00:04:58.792 11:08:40 -- event/event.sh@15 -- # local repeat_times=4 00:04:58.792 11:08:40 -- event/event.sh@17 -- # modprobe nbd 00:04:58.792 11:08:40 -- event/event.sh@19 -- # repeat_pid=54937 00:04:58.792 Process app_repeat pid: 54937 00:04:58.792 11:08:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.792 11:08:40 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.792 11:08:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54937' 00:04:58.792 spdk_app_start Round 0 00:04:58.792 11:08:40 -- event/event.sh@23 -- # for i in {0..2} 00:04:58.792 11:08:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.792 11:08:40 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:04:58.792 11:08:40 -- common/autotest_common.sh@819 -- # '[' -z 54937 ']' 00:04:58.792 11:08:40 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.792 11:08:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.792 11:08:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.792 11:08:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.792 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 [2024-10-13 11:08:40.277638] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:58.792 [2024-10-13 11:08:40.277801] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54937 ] 00:04:59.051 [2024-10-13 11:08:40.409247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.051 [2024-10-13 11:08:40.460700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.051 [2024-10-13 11:08:40.460708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.989 11:08:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.989 11:08:41 -- common/autotest_common.sh@852 -- # return 0 00:04:59.989 11:08:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.989 Malloc0 00:04:59.989 11:08:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.558 Malloc1 00:05:00.558 11:08:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@12 -- # local i 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.558 11:08:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.817 /dev/nbd0 00:05:00.817 11:08:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.817 11:08:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.817 11:08:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:00.817 11:08:42 -- common/autotest_common.sh@857 -- # local i 00:05:00.817 11:08:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:00.817 11:08:42 -- common/autotest_common.sh@859 
-- # (( i <= 20 )) 00:05:00.817 11:08:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:00.817 11:08:42 -- common/autotest_common.sh@861 -- # break 00:05:00.817 11:08:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:00.817 11:08:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:00.817 11:08:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.817 1+0 records in 00:05:00.817 1+0 records out 00:05:00.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356794 s, 11.5 MB/s 00:05:00.817 11:08:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.817 11:08:42 -- common/autotest_common.sh@874 -- # size=4096 00:05:00.817 11:08:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.817 11:08:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:00.817 11:08:42 -- common/autotest_common.sh@877 -- # return 0 00:05:00.817 11:08:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.817 11:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.817 11:08:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.077 /dev/nbd1 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.077 11:08:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:01.077 11:08:42 -- common/autotest_common.sh@857 -- # local i 00:05:01.077 11:08:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:01.077 11:08:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:01.077 11:08:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:01.077 11:08:42 -- common/autotest_common.sh@861 -- # break 00:05:01.077 11:08:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:01.077 11:08:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:01.077 11:08:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.077 1+0 records in 00:05:01.077 1+0 records out 00:05:01.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301439 s, 13.6 MB/s 00:05:01.077 11:08:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.077 11:08:42 -- common/autotest_common.sh@874 -- # size=4096 00:05:01.077 11:08:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.077 11:08:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:01.077 11:08:42 -- common/autotest_common.sh@877 -- # return 0 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.077 11:08:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.336 { 00:05:01.336 "nbd_device": "/dev/nbd0", 00:05:01.336 "bdev_name": "Malloc0" 00:05:01.336 }, 00:05:01.336 { 00:05:01.336 "nbd_device": "/dev/nbd1", 00:05:01.336 "bdev_name": 
"Malloc1" 00:05:01.336 } 00:05:01.336 ]' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.336 { 00:05:01.336 "nbd_device": "/dev/nbd0", 00:05:01.336 "bdev_name": "Malloc0" 00:05:01.336 }, 00:05:01.336 { 00:05:01.336 "nbd_device": "/dev/nbd1", 00:05:01.336 "bdev_name": "Malloc1" 00:05:01.336 } 00:05:01.336 ]' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.336 /dev/nbd1' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.336 /dev/nbd1' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.336 256+0 records in 00:05:01.336 256+0 records out 00:05:01.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775868 s, 135 MB/s 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.336 256+0 records in 00:05:01.336 256+0 records out 00:05:01.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236557 s, 44.3 MB/s 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.336 11:08:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.595 256+0 records in 00:05:01.595 256+0 records out 00:05:01.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260047 s, 40.3 MB/s 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@51 -- # local i 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.595 11:08:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@41 -- # break 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.854 11:08:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@41 -- # break 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.113 11:08:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@65 -- # true 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.371 11:08:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.371 11:08:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.630 11:08:44 -- event/event.sh@35 -- # sleep 3 00:05:02.889 [2024-10-13 11:08:44.311670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.889 [2024-10-13 11:08:44.365375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.889 [2024-10-13 
11:08:44.365378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.889 [2024-10-13 11:08:44.399003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.889 [2024-10-13 11:08:44.399254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.182 spdk_app_start Round 1 00:05:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.183 11:08:47 -- event/event.sh@23 -- # for i in {0..2} 00:05:06.183 11:08:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.183 11:08:47 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:05:06.183 11:08:47 -- common/autotest_common.sh@819 -- # '[' -z 54937 ']' 00:05:06.183 11:08:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.183 11:08:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.183 11:08:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.183 11:08:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.183 11:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:06.183 11:08:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.183 11:08:47 -- common/autotest_common.sh@852 -- # return 0 00:05:06.183 11:08:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.183 Malloc0 00:05:06.183 11:08:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.451 Malloc1 00:05:06.451 11:08:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@12 -- # local i 00:05:06.451 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.452 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.452 11:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.717 /dev/nbd0 00:05:06.717 11:08:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.717 11:08:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.717 11:08:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:06.717 11:08:48 -- common/autotest_common.sh@857 -- # local i 00:05:06.717 11:08:48 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:05:06.717 11:08:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:06.717 11:08:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:06.717 11:08:48 -- common/autotest_common.sh@861 -- # break 00:05:06.717 11:08:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:06.717 11:08:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:06.717 11:08:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.717 1+0 records in 00:05:06.717 1+0 records out 00:05:06.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039579 s, 10.3 MB/s 00:05:06.717 11:08:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.717 11:08:48 -- common/autotest_common.sh@874 -- # size=4096 00:05:06.717 11:08:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.717 11:08:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:06.717 11:08:48 -- common/autotest_common.sh@877 -- # return 0 00:05:06.717 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.717 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.717 11:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.976 /dev/nbd1 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.976 11:08:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:06.976 11:08:48 -- common/autotest_common.sh@857 -- # local i 00:05:06.976 11:08:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:06.976 11:08:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:06.976 11:08:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:06.976 11:08:48 -- common/autotest_common.sh@861 -- # break 00:05:06.976 11:08:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:06.976 11:08:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:06.976 11:08:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.976 1+0 records in 00:05:06.976 1+0 records out 00:05:06.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220278 s, 18.6 MB/s 00:05:06.976 11:08:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.976 11:08:48 -- common/autotest_common.sh@874 -- # size=4096 00:05:06.976 11:08:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.976 11:08:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:06.976 11:08:48 -- common/autotest_common.sh@877 -- # return 0 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.976 11:08:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.235 { 00:05:07.235 "nbd_device": "/dev/nbd0", 00:05:07.235 "bdev_name": "Malloc0" 00:05:07.235 }, 00:05:07.235 { 00:05:07.235 
"nbd_device": "/dev/nbd1", 00:05:07.235 "bdev_name": "Malloc1" 00:05:07.235 } 00:05:07.235 ]' 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.235 { 00:05:07.235 "nbd_device": "/dev/nbd0", 00:05:07.235 "bdev_name": "Malloc0" 00:05:07.235 }, 00:05:07.235 { 00:05:07.235 "nbd_device": "/dev/nbd1", 00:05:07.235 "bdev_name": "Malloc1" 00:05:07.235 } 00:05:07.235 ]' 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.235 /dev/nbd1' 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.235 /dev/nbd1' 00:05:07.235 11:08:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.494 256+0 records in 00:05:07.494 256+0 records out 00:05:07.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00881652 s, 119 MB/s 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.494 256+0 records in 00:05:07.494 256+0 records out 00:05:07.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219325 s, 47.8 MB/s 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.494 256+0 records in 00:05:07.494 256+0 records out 00:05:07.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300593 s, 34.9 MB/s 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.494 11:08:48 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.494 11:08:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@41 -- # break 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.754 11:08:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@41 -- # break 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.016 11:08:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@65 -- # true 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.281 11:08:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.281 11:08:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.540 11:08:50 -- event/event.sh@35 -- # sleep 3 00:05:08.798 [2024-10-13 11:08:50.222337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.798 [2024-10-13 11:08:50.269378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
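Each app_repeat round traced above follows the same cycle: create two malloc bdevs (64 MB, 4 KiB blocks) over the nbd-test RPC socket, expose them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each with O_DIRECT, compare it back byte for byte, then tear the devices down and SIGTERM the app before the next round. A condensed sketch of that cycle, taken from the rpc.py and dd/cmp calls in the trace (RPC and TESTF are shorthands introduced here):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TESTF=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  $RPC bdev_malloc_create 64 4096          # -> Malloc0
  $RPC bdev_malloc_create 64 4096          # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=$TESTF bs=4096 count=256           # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$TESTF of=$nbd bs=4096 count=256 oflag=direct   # write through the nbd device
      cmp -b -n 1M $TESTF $nbd                              # read back and compare
  done
  rm $TESTF

  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM          # end of the round; the harness sleeps 3s and repeats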
00:05:08.798 [2024-10-13 11:08:50.269386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.798 [2024-10-13 11:08:50.297690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.798 [2024-10-13 11:08:50.297763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.117 spdk_app_start Round 2 00:05:12.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.117 11:08:53 -- event/event.sh@23 -- # for i in {0..2} 00:05:12.117 11:08:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.117 11:08:53 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:05:12.117 11:08:53 -- common/autotest_common.sh@819 -- # '[' -z 54937 ']' 00:05:12.117 11:08:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.117 11:08:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.117 11:08:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.117 11:08:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.117 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:05:12.117 11:08:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.117 11:08:53 -- common/autotest_common.sh@852 -- # return 0 00:05:12.117 11:08:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.117 Malloc0 00:05:12.117 11:08:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.376 Malloc1 00:05:12.376 11:08:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.376 11:08:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@12 -- # local i 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.377 11:08:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.635 /dev/nbd0 00:05:12.635 11:08:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.635 11:08:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.635 11:08:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:12.635 11:08:54 -- common/autotest_common.sh@857 -- # local i 00:05:12.635 11:08:54 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:12.635 11:08:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:12.635 11:08:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:12.636 11:08:54 -- common/autotest_common.sh@861 -- # break 00:05:12.636 11:08:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:12.636 11:08:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:12.636 11:08:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.636 1+0 records in 00:05:12.636 1+0 records out 00:05:12.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300642 s, 13.6 MB/s 00:05:12.636 11:08:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.636 11:08:54 -- common/autotest_common.sh@874 -- # size=4096 00:05:12.636 11:08:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.636 11:08:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:12.636 11:08:54 -- common/autotest_common.sh@877 -- # return 0 00:05:12.636 11:08:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.636 11:08:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.636 11:08:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.895 /dev/nbd1 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.895 11:08:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:12.895 11:08:54 -- common/autotest_common.sh@857 -- # local i 00:05:12.895 11:08:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:12.895 11:08:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:12.895 11:08:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:12.895 11:08:54 -- common/autotest_common.sh@861 -- # break 00:05:12.895 11:08:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:12.895 11:08:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:12.895 11:08:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.895 1+0 records in 00:05:12.895 1+0 records out 00:05:12.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321223 s, 12.8 MB/s 00:05:12.895 11:08:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.895 11:08:54 -- common/autotest_common.sh@874 -- # size=4096 00:05:12.895 11:08:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.895 11:08:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:12.895 11:08:54 -- common/autotest_common.sh@877 -- # return 0 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.895 11:08:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.155 { 00:05:13.155 "nbd_device": "/dev/nbd0", 00:05:13.155 "bdev_name": "Malloc0" 
00:05:13.155 }, 00:05:13.155 { 00:05:13.155 "nbd_device": "/dev/nbd1", 00:05:13.155 "bdev_name": "Malloc1" 00:05:13.155 } 00:05:13.155 ]' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.155 { 00:05:13.155 "nbd_device": "/dev/nbd0", 00:05:13.155 "bdev_name": "Malloc0" 00:05:13.155 }, 00:05:13.155 { 00:05:13.155 "nbd_device": "/dev/nbd1", 00:05:13.155 "bdev_name": "Malloc1" 00:05:13.155 } 00:05:13.155 ]' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.155 /dev/nbd1' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.155 /dev/nbd1' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.155 256+0 records in 00:05:13.155 256+0 records out 00:05:13.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599451 s, 175 MB/s 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.155 256+0 records in 00:05:13.155 256+0 records out 00:05:13.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022937 s, 45.7 MB/s 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.155 256+0 records in 00:05:13.155 256+0 records out 00:05:13.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230271 s, 45.5 MB/s 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.155 11:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@51 -- # local i 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@41 -- # break 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.414 11:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.673 11:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.932 11:08:55 -- bdev/nbd_common.sh@41 -- # break 00:05:13.932 11:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.932 11:08:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.932 11:08:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.932 11:08:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@65 -- # true 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.192 11:08:55 -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.192 11:08:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.451 11:08:55 -- event/event.sh@35 -- # sleep 3 00:05:14.451 [2024-10-13 11:08:56.042094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.711 [2024-10-13 11:08:56.091173] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:14.711 [2024-10-13 11:08:56.091184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.711 [2024-10-13 11:08:56.121275] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.711 [2024-10-13 11:08:56.121348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.999 11:08:58 -- event/event.sh@38 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:05:17.999 11:08:58 -- common/autotest_common.sh@819 -- # '[' -z 54937 ']' 00:05:17.999 11:08:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.999 11:08:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.999 11:08:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.999 11:08:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.999 11:08:58 -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 11:08:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.999 11:08:59 -- common/autotest_common.sh@852 -- # return 0 00:05:17.999 11:08:59 -- event/event.sh@39 -- # killprocess 54937 00:05:17.999 11:08:59 -- common/autotest_common.sh@926 -- # '[' -z 54937 ']' 00:05:17.999 11:08:59 -- common/autotest_common.sh@930 -- # kill -0 54937 00:05:17.999 11:08:59 -- common/autotest_common.sh@931 -- # uname 00:05:17.999 11:08:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:17.999 11:08:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54937 00:05:17.999 killing process with pid 54937 00:05:17.999 11:08:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:17.999 11:08:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:17.999 11:08:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54937' 00:05:17.999 11:08:59 -- common/autotest_common.sh@945 -- # kill 54937 00:05:17.999 11:08:59 -- common/autotest_common.sh@950 -- # wait 54937 00:05:17.999 spdk_app_start is called in Round 0. 00:05:17.999 Shutdown signal received, stop current app iteration 00:05:17.999 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.999 spdk_app_start is called in Round 1. 00:05:17.999 Shutdown signal received, stop current app iteration 00:05:17.999 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.999 spdk_app_start is called in Round 2. 00:05:17.999 Shutdown signal received, stop current app iteration 00:05:17.999 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.999 spdk_app_start is called in Round 3. 
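The Round 0-3 messages above are printed by the app_repeat binary itself, which was started with -t 4 (four iterations, matching repeat_times=4 in the trace). The harness side is a plain loop over rounds 0-2, each ending in spdk_kill_instance SIGTERM plus a 3-second pause, with the final round closed by killprocess, as the event.sh xtrace lines show. Roughly:

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
      # bdev/nbd setup and the dd/cmp verification cycle, as in the rounds above
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done
  waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock   # Round 3, the app's final iteration
  killprocess $repeat_pid                            # kill -0 check, kill, wait - as traced above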
00:05:17.999 Shutdown signal received, stop current app iteration 00:05:17.999 ************************************ 00:05:17.999 END TEST app_repeat 00:05:17.999 ************************************ 00:05:17.999 11:08:59 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:17.999 11:08:59 -- event/event.sh@42 -- # return 0 00:05:17.999 00:05:17.999 real 0m19.106s 00:05:17.999 user 0m43.652s 00:05:17.999 sys 0m2.428s 00:05:17.999 11:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.999 11:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 11:08:59 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:17.999 11:08:59 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.999 11:08:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.999 11:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.999 11:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 ************************************ 00:05:17.999 START TEST cpu_locks 00:05:17.999 ************************************ 00:05:17.999 11:08:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.999 * Looking for test storage... 00:05:17.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.999 11:08:59 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:17.999 11:08:59 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:17.999 11:08:59 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:17.999 11:08:59 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:17.999 11:08:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.999 11:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.999 11:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 ************************************ 00:05:17.999 START TEST default_locks 00:05:17.999 ************************************ 00:05:17.999 11:08:59 -- common/autotest_common.sh@1104 -- # default_locks 00:05:17.999 11:08:59 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55369 00:05:17.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.999 11:08:59 -- event/cpu_locks.sh@47 -- # waitforlisten 55369 00:05:18.000 11:08:59 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.000 11:08:59 -- common/autotest_common.sh@819 -- # '[' -z 55369 ']' 00:05:18.000 11:08:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.000 11:08:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.000 11:08:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.000 11:08:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.000 11:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.000 [2024-10-13 11:08:59.551655] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
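default_locks boots a bare spdk_tgt on a single core, waits for its RPC socket, and then checks that the running target holds its CPU lock file. The pattern, condensed from the launch and waitforlisten trace above (waitforlisten lives in test/common/autotest_common.sh; the two commands traced at cpu_locks.sh@22 below are presumably piped together):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"        # default socket /var/tmp/spdk.sock, up to 100 retries
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # per-CPU lock file held by the target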
00:05:18.000 [2024-10-13 11:08:59.551759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55369 ] 00:05:18.259 [2024-10-13 11:08:59.686771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.259 [2024-10-13 11:08:59.742196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.259 [2024-10-13 11:08:59.742595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.195 11:09:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.195 11:09:00 -- common/autotest_common.sh@852 -- # return 0 00:05:19.195 11:09:00 -- event/cpu_locks.sh@49 -- # locks_exist 55369 00:05:19.195 11:09:00 -- event/cpu_locks.sh@22 -- # lslocks -p 55369 00:05:19.195 11:09:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.195 11:09:00 -- event/cpu_locks.sh@50 -- # killprocess 55369 00:05:19.195 11:09:00 -- common/autotest_common.sh@926 -- # '[' -z 55369 ']' 00:05:19.195 11:09:00 -- common/autotest_common.sh@930 -- # kill -0 55369 00:05:19.195 11:09:00 -- common/autotest_common.sh@931 -- # uname 00:05:19.195 11:09:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.195 11:09:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55369 00:05:19.195 killing process with pid 55369 00:05:19.195 11:09:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.195 11:09:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.195 11:09:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55369' 00:05:19.195 11:09:00 -- common/autotest_common.sh@945 -- # kill 55369 00:05:19.195 11:09:00 -- common/autotest_common.sh@950 -- # wait 55369 00:05:19.763 11:09:01 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55369 00:05:19.763 11:09:01 -- common/autotest_common.sh@640 -- # local es=0 00:05:19.763 11:09:01 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55369 00:05:19.763 11:09:01 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:19.763 11:09:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.763 11:09:01 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:19.763 11:09:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.763 11:09:01 -- common/autotest_common.sh@643 -- # waitforlisten 55369 00:05:19.763 11:09:01 -- common/autotest_common.sh@819 -- # '[' -z 55369 ']' 00:05:19.763 11:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.763 ERROR: process (pid: 55369) is no longer running 00:05:19.763 11:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.763 11:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:19.763 11:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.763 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55369) - No such process 00:05:19.763 11:09:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.763 11:09:01 -- common/autotest_common.sh@852 -- # return 1 00:05:19.763 11:09:01 -- common/autotest_common.sh@643 -- # es=1 00:05:19.763 11:09:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:19.763 11:09:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:19.763 11:09:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:19.763 11:09:01 -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.763 11:09:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.763 ************************************ 00:05:19.763 END TEST default_locks 00:05:19.763 ************************************ 00:05:19.763 11:09:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.763 11:09:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.763 00:05:19.763 real 0m1.578s 00:05:19.763 user 0m1.761s 00:05:19.763 sys 0m0.380s 00:05:19.763 11:09:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.763 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 11:09:01 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.763 11:09:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.763 11:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.763 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 ************************************ 00:05:19.763 START TEST default_locks_via_rpc 00:05:19.763 ************************************ 00:05:19.763 11:09:01 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:19.763 11:09:01 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55415 00:05:19.763 11:09:01 -- event/cpu_locks.sh@63 -- # waitforlisten 55415 00:05:19.763 11:09:01 -- common/autotest_common.sh@819 -- # '[' -z 55415 ']' 00:05:19.763 11:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.763 11:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.763 11:09:01 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.764 11:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.764 11:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.764 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.764 [2024-10-13 11:09:01.175816] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:19.764 [2024-10-13 11:09:01.175922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55415 ] 00:05:19.764 [2024-10-13 11:09:01.314301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.023 [2024-10-13 11:09:01.371256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.023 [2024-10-13 11:09:01.371444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.589 11:09:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.589 11:09:02 -- common/autotest_common.sh@852 -- # return 0 00:05:20.589 11:09:02 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.589 11:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.589 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.589 11:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.589 11:09:02 -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.589 11:09:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.589 11:09:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.589 11:09:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.590 11:09:02 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.590 11:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.590 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.848 11:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.848 11:09:02 -- event/cpu_locks.sh@71 -- # locks_exist 55415 00:05:20.848 11:09:02 -- event/cpu_locks.sh@22 -- # lslocks -p 55415 00:05:20.848 11:09:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.106 11:09:02 -- event/cpu_locks.sh@73 -- # killprocess 55415 00:05:21.106 11:09:02 -- common/autotest_common.sh@926 -- # '[' -z 55415 ']' 00:05:21.106 11:09:02 -- common/autotest_common.sh@930 -- # kill -0 55415 00:05:21.106 11:09:02 -- common/autotest_common.sh@931 -- # uname 00:05:21.106 11:09:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.106 11:09:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55415 00:05:21.106 11:09:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:21.106 killing process with pid 55415 00:05:21.106 11:09:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:21.106 11:09:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55415' 00:05:21.106 11:09:02 -- common/autotest_common.sh@945 -- # kill 55415 00:05:21.106 11:09:02 -- common/autotest_common.sh@950 -- # wait 55415 00:05:21.365 00:05:21.365 real 0m1.824s 00:05:21.365 user 0m2.116s 00:05:21.365 sys 0m0.463s 00:05:21.365 ************************************ 00:05:21.365 11:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.365 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:21.365 END TEST default_locks_via_rpc 00:05:21.365 ************************************ 00:05:21.623 11:09:02 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:21.623 11:09:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.623 11:09:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.623 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:21.623 
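Note on the default_locks_via_rpc run above: it toggles the per-core lock files at runtime with rpc_cmd framework_disable_cpumask_locks / framework_enable_cpumask_locks. A minimal manual equivalent, as a sketch only (assuming scripts/rpc.py from the same spdk checkout, which the rpc_cmd helper wraps):

  ./scripts/rpc.py framework_disable_cpumask_locks   # releases the /var/tmp/spdk_cpu_lock_* files
  ./scripts/rpc.py framework_enable_cpumask_locks    # re-claims them for the target's current cpumask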
************************************ 00:05:21.623 START TEST non_locking_app_on_locked_coremask 00:05:21.623 ************************************ 00:05:21.623 11:09:02 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:21.623 11:09:02 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55461 00:05:21.623 11:09:02 -- event/cpu_locks.sh@81 -- # waitforlisten 55461 /var/tmp/spdk.sock 00:05:21.624 11:09:02 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.624 11:09:02 -- common/autotest_common.sh@819 -- # '[' -z 55461 ']' 00:05:21.624 11:09:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.624 11:09:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.624 11:09:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.624 11:09:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.624 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:21.624 [2024-10-13 11:09:03.052947] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:21.624 [2024-10-13 11:09:03.053045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55461 ] 00:05:21.624 [2024-10-13 11:09:03.191002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.882 [2024-10-13 11:09:03.242508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.882 [2024-10-13 11:09:03.242737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.448 11:09:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.448 11:09:04 -- common/autotest_common.sh@852 -- # return 0 00:05:22.448 11:09:04 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:22.448 11:09:04 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55477 00:05:22.448 11:09:04 -- event/cpu_locks.sh@85 -- # waitforlisten 55477 /var/tmp/spdk2.sock 00:05:22.448 11:09:04 -- common/autotest_common.sh@819 -- # '[' -z 55477 ']' 00:05:22.448 11:09:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.448 11:09:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.448 11:09:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.448 11:09:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.448 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:05:22.712 [2024-10-13 11:09:04.062214] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:22.712 [2024-10-13 11:09:04.062319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55477 ] 00:05:22.712 [2024-10-13 11:09:04.200074] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
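The "CPU core locks deactivated" notice just above comes from launching the second target with --disable-cpumask-locks, so it can share core 0 with the target that already holds the lock. In outline, a sketch using the same binary path and flags that appear in this log (pids and lock-file names are what this run reports, not guaranteed):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                                  # claims the core-0 lock (/var/tmp/spdk_cpu_lock_000)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts on the same core without claiming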
00:05:22.712 [2024-10-13 11:09:04.200124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.712 [2024-10-13 11:09:04.308004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.712 [2024-10-13 11:09:04.308197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.656 11:09:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.656 11:09:05 -- common/autotest_common.sh@852 -- # return 0 00:05:23.656 11:09:05 -- event/cpu_locks.sh@87 -- # locks_exist 55461 00:05:23.656 11:09:05 -- event/cpu_locks.sh@22 -- # lslocks -p 55461 00:05:23.656 11:09:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.227 11:09:05 -- event/cpu_locks.sh@89 -- # killprocess 55461 00:05:24.227 11:09:05 -- common/autotest_common.sh@926 -- # '[' -z 55461 ']' 00:05:24.227 11:09:05 -- common/autotest_common.sh@930 -- # kill -0 55461 00:05:24.227 11:09:05 -- common/autotest_common.sh@931 -- # uname 00:05:24.227 11:09:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:24.227 11:09:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55461 00:05:24.486 11:09:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:24.486 11:09:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:24.486 killing process with pid 55461 00:05:24.486 11:09:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55461' 00:05:24.486 11:09:05 -- common/autotest_common.sh@945 -- # kill 55461 00:05:24.486 11:09:05 -- common/autotest_common.sh@950 -- # wait 55461 00:05:25.053 11:09:06 -- event/cpu_locks.sh@90 -- # killprocess 55477 00:05:25.053 11:09:06 -- common/autotest_common.sh@926 -- # '[' -z 55477 ']' 00:05:25.053 11:09:06 -- common/autotest_common.sh@930 -- # kill -0 55477 00:05:25.053 11:09:06 -- common/autotest_common.sh@931 -- # uname 00:05:25.053 11:09:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.053 11:09:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55477 00:05:25.053 11:09:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.053 11:09:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.053 killing process with pid 55477 00:05:25.053 11:09:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55477' 00:05:25.053 11:09:06 -- common/autotest_common.sh@945 -- # kill 55477 00:05:25.053 11:09:06 -- common/autotest_common.sh@950 -- # wait 55477 00:05:25.312 00:05:25.312 real 0m3.661s 00:05:25.312 user 0m4.315s 00:05:25.312 sys 0m0.865s 00:05:25.312 11:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.312 ************************************ 00:05:25.312 END TEST non_locking_app_on_locked_coremask 00:05:25.312 ************************************ 00:05:25.312 11:09:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.312 11:09:06 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:25.312 11:09:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.312 11:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.312 11:09:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.312 ************************************ 00:05:25.312 START TEST locking_app_on_unlocked_coremask 00:05:25.312 ************************************ 00:05:25.312 11:09:06 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:25.312 11:09:06 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55544 00:05:25.312 11:09:06 -- event/cpu_locks.sh@99 -- # waitforlisten 55544 /var/tmp/spdk.sock 00:05:25.312 11:09:06 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:25.312 11:09:06 -- common/autotest_common.sh@819 -- # '[' -z 55544 ']' 00:05:25.312 11:09:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.312 11:09:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.312 11:09:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.312 11:09:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.312 11:09:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.312 [2024-10-13 11:09:06.767006] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:25.312 [2024-10-13 11:09:06.767113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55544 ] 00:05:25.312 [2024-10-13 11:09:06.902157] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:25.312 [2024-10-13 11:09:06.902210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.570 [2024-10-13 11:09:06.953368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.570 [2024-10-13 11:09:06.953546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.505 11:09:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.505 11:09:07 -- common/autotest_common.sh@852 -- # return 0 00:05:26.505 11:09:07 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55560 00:05:26.505 11:09:07 -- event/cpu_locks.sh@103 -- # waitforlisten 55560 /var/tmp/spdk2.sock 00:05:26.505 11:09:07 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.505 11:09:07 -- common/autotest_common.sh@819 -- # '[' -z 55560 ']' 00:05:26.505 11:09:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.505 11:09:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.505 11:09:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.505 11:09:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.505 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:05:26.505 [2024-10-13 11:09:07.826920] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:26.505 [2024-10-13 11:09:07.827025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55560 ] 00:05:26.505 [2024-10-13 11:09:07.964821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.505 [2024-10-13 11:09:08.070259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.505 [2024-10-13 11:09:08.074479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.439 11:09:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.439 11:09:08 -- common/autotest_common.sh@852 -- # return 0 00:05:27.439 11:09:08 -- event/cpu_locks.sh@105 -- # locks_exist 55560 00:05:27.439 11:09:08 -- event/cpu_locks.sh@22 -- # lslocks -p 55560 00:05:27.439 11:09:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.374 11:09:09 -- event/cpu_locks.sh@107 -- # killprocess 55544 00:05:28.374 11:09:09 -- common/autotest_common.sh@926 -- # '[' -z 55544 ']' 00:05:28.374 11:09:09 -- common/autotest_common.sh@930 -- # kill -0 55544 00:05:28.374 11:09:09 -- common/autotest_common.sh@931 -- # uname 00:05:28.374 11:09:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.374 11:09:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55544 00:05:28.374 killing process with pid 55544 00:05:28.374 11:09:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.374 11:09:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.374 11:09:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55544' 00:05:28.374 11:09:09 -- common/autotest_common.sh@945 -- # kill 55544 00:05:28.374 11:09:09 -- common/autotest_common.sh@950 -- # wait 55544 00:05:28.633 11:09:10 -- event/cpu_locks.sh@108 -- # killprocess 55560 00:05:28.633 11:09:10 -- common/autotest_common.sh@926 -- # '[' -z 55560 ']' 00:05:28.633 11:09:10 -- common/autotest_common.sh@930 -- # kill -0 55560 00:05:28.633 11:09:10 -- common/autotest_common.sh@931 -- # uname 00:05:28.633 11:09:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.633 11:09:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55560 00:05:28.633 killing process with pid 55560 00:05:28.633 11:09:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.633 11:09:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.633 11:09:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55560' 00:05:28.633 11:09:10 -- common/autotest_common.sh@945 -- # kill 55560 00:05:28.633 11:09:10 -- common/autotest_common.sh@950 -- # wait 55560 00:05:28.891 ************************************ 00:05:28.891 END TEST locking_app_on_unlocked_coremask 00:05:28.891 ************************************ 00:05:28.891 00:05:28.891 real 0m3.776s 00:05:28.891 user 0m4.511s 00:05:28.891 sys 0m0.881s 00:05:28.891 11:09:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.891 11:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.150 11:09:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.150 11:09:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.150 11:09:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.150 11:09:10 -- common/autotest_common.sh@10 -- # set +x 
00:05:29.150 ************************************ 00:05:29.150 START TEST locking_app_on_locked_coremask 00:05:29.150 ************************************ 00:05:29.150 11:09:10 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:29.150 11:09:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55627 00:05:29.150 11:09:10 -- event/cpu_locks.sh@116 -- # waitforlisten 55627 /var/tmp/spdk.sock 00:05:29.150 11:09:10 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.150 11:09:10 -- common/autotest_common.sh@819 -- # '[' -z 55627 ']' 00:05:29.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.150 11:09:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.150 11:09:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.150 11:09:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.150 11:09:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.150 11:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.150 [2024-10-13 11:09:10.592305] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:29.150 [2024-10-13 11:09:10.592608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55627 ] 00:05:29.150 [2024-10-13 11:09:10.729960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.409 [2024-10-13 11:09:10.783996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.409 [2024-10-13 11:09:10.784461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.976 11:09:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.976 11:09:11 -- common/autotest_common.sh@852 -- # return 0 00:05:29.976 11:09:11 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.976 11:09:11 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55643 00:05:29.976 11:09:11 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55643 /var/tmp/spdk2.sock 00:05:29.976 11:09:11 -- common/autotest_common.sh@640 -- # local es=0 00:05:29.976 11:09:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55643 /var/tmp/spdk2.sock 00:05:29.976 11:09:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:29.976 11:09:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:29.976 11:09:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:29.976 11:09:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:29.976 11:09:11 -- common/autotest_common.sh@643 -- # waitforlisten 55643 /var/tmp/spdk2.sock 00:05:29.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.976 11:09:11 -- common/autotest_common.sh@819 -- # '[' -z 55643 ']' 00:05:29.976 11:09:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.976 11:09:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.976 11:09:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:29.976 11:09:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.976 11:09:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.234 [2024-10-13 11:09:11.610499] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:30.234 [2024-10-13 11:09:11.610573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55643 ] 00:05:30.234 [2024-10-13 11:09:11.745712] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55627 has claimed it. 00:05:30.234 [2024-10-13 11:09:11.745823] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.800 ERROR: process (pid: 55643) is no longer running 00:05:30.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55643) - No such process 00:05:30.800 11:09:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.800 11:09:12 -- common/autotest_common.sh@852 -- # return 1 00:05:30.800 11:09:12 -- common/autotest_common.sh@643 -- # es=1 00:05:30.800 11:09:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:30.800 11:09:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:30.800 11:09:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:30.800 11:09:12 -- event/cpu_locks.sh@122 -- # locks_exist 55627 00:05:30.800 11:09:12 -- event/cpu_locks.sh@22 -- # lslocks -p 55627 00:05:30.800 11:09:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.367 11:09:12 -- event/cpu_locks.sh@124 -- # killprocess 55627 00:05:31.367 11:09:12 -- common/autotest_common.sh@926 -- # '[' -z 55627 ']' 00:05:31.367 11:09:12 -- common/autotest_common.sh@930 -- # kill -0 55627 00:05:31.368 11:09:12 -- common/autotest_common.sh@931 -- # uname 00:05:31.368 11:09:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.368 11:09:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55627 00:05:31.368 killing process with pid 55627 00:05:31.368 11:09:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.368 11:09:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.368 11:09:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55627' 00:05:31.368 11:09:12 -- common/autotest_common.sh@945 -- # kill 55627 00:05:31.368 11:09:12 -- common/autotest_common.sh@950 -- # wait 55627 00:05:31.626 ************************************ 00:05:31.626 END TEST locking_app_on_locked_coremask 00:05:31.626 ************************************ 00:05:31.626 00:05:31.626 real 0m2.555s 00:05:31.626 user 0m3.088s 00:05:31.626 sys 0m0.504s 00:05:31.626 11:09:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.626 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.626 11:09:13 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.626 11:09:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.626 11:09:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.626 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.626 ************************************ 00:05:31.626 START TEST locking_overlapped_coremask 00:05:31.626 ************************************ 00:05:31.626 11:09:13 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:31.626 11:09:13 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55683 00:05:31.626 11:09:13 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.626 11:09:13 -- event/cpu_locks.sh@133 -- # waitforlisten 55683 /var/tmp/spdk.sock 00:05:31.626 11:09:13 -- common/autotest_common.sh@819 -- # '[' -z 55683 ']' 00:05:31.626 11:09:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.626 11:09:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.626 11:09:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.626 11:09:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.626 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.626 [2024-10-13 11:09:13.186205] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:31.626 [2024-10-13 11:09:13.186472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55683 ] 00:05:31.885 [2024-10-13 11:09:13.318771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.885 [2024-10-13 11:09:13.373269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.885 [2024-10-13 11:09:13.373830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.885 [2024-10-13 11:09:13.373878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.885 [2024-10-13 11:09:13.373886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.821 11:09:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.821 11:09:14 -- common/autotest_common.sh@852 -- # return 0 00:05:32.821 11:09:14 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55701 00:05:32.821 11:09:14 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.821 11:09:14 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55701 /var/tmp/spdk2.sock 00:05:32.821 11:09:14 -- common/autotest_common.sh@640 -- # local es=0 00:05:32.821 11:09:14 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55701 /var/tmp/spdk2.sock 00:05:32.821 11:09:14 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:32.821 11:09:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.821 11:09:14 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:32.821 11:09:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.821 11:09:14 -- common/autotest_common.sh@643 -- # waitforlisten 55701 /var/tmp/spdk2.sock 00:05:32.821 11:09:14 -- common/autotest_common.sh@819 -- # '[' -z 55701 ']' 00:05:32.821 11:09:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.821 11:09:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.821 11:09:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
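For the overlap scenario being set up here, the first target asked for -m 0x7 (cores 0-2) and the second for -m 0x1c (cores 2-4); the masks intersect only on core 2, which is why the lock claim below fails on that core. A quick check of the mask arithmetic:

  printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 / core 2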
00:05:32.821 11:09:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.821 11:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:32.821 [2024-10-13 11:09:14.270996] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:32.821 [2024-10-13 11:09:14.271091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55701 ] 00:05:32.821 [2024-10-13 11:09:14.414205] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55683 has claimed it. 00:05:32.821 [2024-10-13 11:09:14.414288] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.388 ERROR: process (pid: 55701) is no longer running 00:05:33.388 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55701) - No such process 00:05:33.389 11:09:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.389 11:09:14 -- common/autotest_common.sh@852 -- # return 1 00:05:33.389 11:09:14 -- common/autotest_common.sh@643 -- # es=1 00:05:33.389 11:09:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.389 11:09:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:33.389 11:09:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.389 11:09:14 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.389 11:09:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.389 11:09:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.389 11:09:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.389 11:09:14 -- event/cpu_locks.sh@141 -- # killprocess 55683 00:05:33.389 11:09:14 -- common/autotest_common.sh@926 -- # '[' -z 55683 ']' 00:05:33.389 11:09:14 -- common/autotest_common.sh@930 -- # kill -0 55683 00:05:33.389 11:09:14 -- common/autotest_common.sh@931 -- # uname 00:05:33.389 11:09:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.389 11:09:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55683 00:05:33.389 killing process with pid 55683 00:05:33.389 11:09:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.389 11:09:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.389 11:09:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55683' 00:05:33.389 11:09:14 -- common/autotest_common.sh@945 -- # kill 55683 00:05:33.389 11:09:14 -- common/autotest_common.sh@950 -- # wait 55683 00:05:33.648 ************************************ 00:05:33.648 END TEST locking_overlapped_coremask 00:05:33.648 ************************************ 00:05:33.648 00:05:33.648 real 0m2.096s 00:05:33.648 user 0m6.034s 00:05:33.648 sys 0m0.335s 00:05:33.648 11:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.648 11:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.908 11:09:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.908 11:09:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.908 11:09:15 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.908 11:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.908 ************************************ 00:05:33.908 START TEST locking_overlapped_coremask_via_rpc 00:05:33.908 ************************************ 00:05:33.908 11:09:15 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:33.908 11:09:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55741 00:05:33.908 11:09:15 -- event/cpu_locks.sh@149 -- # waitforlisten 55741 /var/tmp/spdk.sock 00:05:33.908 11:09:15 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.908 11:09:15 -- common/autotest_common.sh@819 -- # '[' -z 55741 ']' 00:05:33.908 11:09:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.908 11:09:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.908 11:09:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.908 11:09:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.908 11:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.908 [2024-10-13 11:09:15.348049] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:33.908 [2024-10-13 11:09:15.348373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55741 ] 00:05:33.908 [2024-10-13 11:09:15.485499] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:33.908 [2024-10-13 11:09:15.485707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.166 [2024-10-13 11:09:15.540756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.166 [2024-10-13 11:09:15.541383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.166 [2024-10-13 11:09:15.541488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.166 [2024-10-13 11:09:15.541496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.734 11:09:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.734 11:09:16 -- common/autotest_common.sh@852 -- # return 0 00:05:34.734 11:09:16 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.734 11:09:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55759 00:05:34.734 11:09:16 -- event/cpu_locks.sh@153 -- # waitforlisten 55759 /var/tmp/spdk2.sock 00:05:34.734 11:09:16 -- common/autotest_common.sh@819 -- # '[' -z 55759 ']' 00:05:34.734 11:09:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.734 11:09:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.734 11:09:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:34.734 11:09:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.734 11:09:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.734 [2024-10-13 11:09:16.317086] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:34.734 [2024-10-13 11:09:16.317376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55759 ] 00:05:34.993 [2024-10-13 11:09:16.453139] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.993 [2024-10-13 11:09:16.453193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.993 [2024-10-13 11:09:16.569781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.993 [2024-10-13 11:09:16.570063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.993 [2024-10-13 11:09:16.570216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.993 [2024-10-13 11:09:16.570216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:35.930 11:09:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.930 11:09:17 -- common/autotest_common.sh@852 -- # return 0 00:05:35.930 11:09:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.930 11:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.930 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 11:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.930 11:09:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.930 11:09:17 -- common/autotest_common.sh@640 -- # local es=0 00:05:35.930 11:09:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.930 11:09:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:35.930 11:09:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.930 11:09:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:35.930 11:09:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.930 11:09:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.930 11:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.930 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 [2024-10-13 11:09:17.314499] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55741 has claimed it. 
00:05:35.930 request: 00:05:35.930 { 00:05:35.930 "method": "framework_enable_cpumask_locks", 00:05:35.930 "req_id": 1 00:05:35.930 } 00:05:35.930 Got JSON-RPC error response 00:05:35.930 response: 00:05:35.930 { 00:05:35.930 "code": -32603, 00:05:35.930 "message": "Failed to claim CPU core: 2" 00:05:35.930 } 00:05:35.930 11:09:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:35.930 11:09:17 -- common/autotest_common.sh@643 -- # es=1 00:05:35.930 11:09:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:35.930 11:09:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:35.930 11:09:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:35.930 11:09:17 -- event/cpu_locks.sh@158 -- # waitforlisten 55741 /var/tmp/spdk.sock 00:05:35.930 11:09:17 -- common/autotest_common.sh@819 -- # '[' -z 55741 ']' 00:05:35.930 11:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.930 11:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.930 11:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.930 11:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.930 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.188 11:09:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.188 11:09:17 -- common/autotest_common.sh@852 -- # return 0 00:05:36.188 11:09:17 -- event/cpu_locks.sh@159 -- # waitforlisten 55759 /var/tmp/spdk2.sock 00:05:36.188 11:09:17 -- common/autotest_common.sh@819 -- # '[' -z 55759 ']' 00:05:36.188 11:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.188 11:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.188 11:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
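The -32603 response above is the expected failure: the second target was asked, over its own RPC socket, to claim a cpumask that overlaps core 2, which pid 55741 already holds. The same call can be issued by hand against that socket; this is a sketch, assuming scripts/rpc.py is the tool rpc_cmd forwards to:

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected while the first target runs: JSON-RPC error -32603, "Failed to claim CPU core: 2"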
00:05:36.188 11:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.188 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.447 11:09:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.447 11:09:17 -- common/autotest_common.sh@852 -- # return 0 00:05:36.447 11:09:17 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.447 ************************************ 00:05:36.447 END TEST locking_overlapped_coremask_via_rpc 00:05:36.447 ************************************ 00:05:36.447 11:09:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.447 11:09:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.447 11:09:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.447 00:05:36.447 real 0m2.515s 00:05:36.447 user 0m1.249s 00:05:36.447 sys 0m0.186s 00:05:36.447 11:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.447 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.447 11:09:17 -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.447 11:09:17 -- event/cpu_locks.sh@15 -- # [[ -z 55741 ]] 00:05:36.447 11:09:17 -- event/cpu_locks.sh@15 -- # killprocess 55741 00:05:36.447 11:09:17 -- common/autotest_common.sh@926 -- # '[' -z 55741 ']' 00:05:36.447 11:09:17 -- common/autotest_common.sh@930 -- # kill -0 55741 00:05:36.447 11:09:17 -- common/autotest_common.sh@931 -- # uname 00:05:36.447 11:09:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.447 11:09:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55741 00:05:36.447 killing process with pid 55741 00:05:36.447 11:09:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.447 11:09:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.447 11:09:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55741' 00:05:36.447 11:09:17 -- common/autotest_common.sh@945 -- # kill 55741 00:05:36.447 11:09:17 -- common/autotest_common.sh@950 -- # wait 55741 00:05:36.708 11:09:18 -- event/cpu_locks.sh@16 -- # [[ -z 55759 ]] 00:05:36.708 11:09:18 -- event/cpu_locks.sh@16 -- # killprocess 55759 00:05:36.708 11:09:18 -- common/autotest_common.sh@926 -- # '[' -z 55759 ']' 00:05:36.708 11:09:18 -- common/autotest_common.sh@930 -- # kill -0 55759 00:05:36.708 11:09:18 -- common/autotest_common.sh@931 -- # uname 00:05:36.708 11:09:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.708 11:09:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55759 00:05:36.708 killing process with pid 55759 00:05:36.708 11:09:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:36.708 11:09:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:36.708 11:09:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55759' 00:05:36.708 11:09:18 -- common/autotest_common.sh@945 -- # kill 55759 00:05:36.708 11:09:18 -- common/autotest_common.sh@950 -- # wait 55759 00:05:36.975 11:09:18 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.975 11:09:18 -- event/cpu_locks.sh@1 -- # cleanup 00:05:36.975 11:09:18 -- event/cpu_locks.sh@15 -- # [[ -z 55741 ]] 00:05:36.975 11:09:18 -- event/cpu_locks.sh@15 -- # killprocess 55741 00:05:36.975 11:09:18 -- 
common/autotest_common.sh@926 -- # '[' -z 55741 ']' 00:05:36.975 11:09:18 -- common/autotest_common.sh@930 -- # kill -0 55741 00:05:36.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55741) - No such process 00:05:36.975 11:09:18 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55741 is not found' 00:05:36.975 Process with pid 55741 is not found 00:05:36.975 11:09:18 -- event/cpu_locks.sh@16 -- # [[ -z 55759 ]] 00:05:36.975 Process with pid 55759 is not found 00:05:36.975 11:09:18 -- event/cpu_locks.sh@16 -- # killprocess 55759 00:05:36.975 11:09:18 -- common/autotest_common.sh@926 -- # '[' -z 55759 ']' 00:05:36.975 11:09:18 -- common/autotest_common.sh@930 -- # kill -0 55759 00:05:36.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55759) - No such process 00:05:36.975 11:09:18 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55759 is not found' 00:05:36.975 11:09:18 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.975 00:05:36.975 real 0m19.054s 00:05:36.975 user 0m34.613s 00:05:36.975 sys 0m4.247s 00:05:36.975 11:09:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.975 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.975 ************************************ 00:05:36.975 END TEST cpu_locks 00:05:36.975 ************************************ 00:05:36.975 ************************************ 00:05:36.975 END TEST event 00:05:36.975 ************************************ 00:05:36.975 00:05:36.975 real 0m46.856s 00:05:36.975 user 1m33.046s 00:05:36.975 sys 0m7.335s 00:05:36.975 11:09:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.975 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.975 11:09:18 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.975 11:09:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.975 11:09:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.975 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.975 ************************************ 00:05:36.975 START TEST thread 00:05:36.975 ************************************ 00:05:36.975 11:09:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:37.234 * Looking for test storage... 00:05:37.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:37.234 11:09:18 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.234 11:09:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:37.234 11:09:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.234 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.234 ************************************ 00:05:37.234 START TEST thread_poller_perf 00:05:37.234 ************************************ 00:05:37.234 11:09:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.234 [2024-10-13 11:09:18.642983] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:37.234 [2024-10-13 11:09:18.643079] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55875 ] 00:05:37.234 [2024-10-13 11:09:18.781244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.492 [2024-10-13 11:09:18.836814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.492 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:38.429 [2024-10-13T11:09:20.031Z] ====================================== 00:05:38.429 [2024-10-13T11:09:20.031Z] busy:2208470198 (cyc) 00:05:38.429 [2024-10-13T11:09:20.031Z] total_run_count: 359000 00:05:38.429 [2024-10-13T11:09:20.031Z] tsc_hz: 2200000000 (cyc) 00:05:38.429 [2024-10-13T11:09:20.031Z] ====================================== 00:05:38.429 [2024-10-13T11:09:20.031Z] poller_cost: 6151 (cyc), 2795 (nsec) 00:05:38.429 00:05:38.429 real 0m1.308s 00:05:38.429 user 0m1.161s 00:05:38.429 sys 0m0.039s 00:05:38.429 11:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.429 11:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.429 ************************************ 00:05:38.429 END TEST thread_poller_perf 00:05:38.429 ************************************ 00:05:38.429 11:09:19 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:38.429 11:09:19 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:38.429 11:09:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.429 11:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.429 ************************************ 00:05:38.429 START TEST thread_poller_perf 00:05:38.429 ************************************ 00:05:38.429 11:09:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:38.429 [2024-10-13 11:09:20.004449] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:38.429 [2024-10-13 11:09:20.004726] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55916 ] 00:05:38.688 [2024-10-13 11:09:20.141196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.688 [2024-10-13 11:09:20.188567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.688 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:40.063 [2024-10-13T11:09:21.665Z] ====================================== 00:05:40.064 [2024-10-13T11:09:21.666Z] busy:2203020232 (cyc) 00:05:40.064 [2024-10-13T11:09:21.666Z] total_run_count: 4948000 00:05:40.064 [2024-10-13T11:09:21.666Z] tsc_hz: 2200000000 (cyc) 00:05:40.064 [2024-10-13T11:09:21.666Z] ====================================== 00:05:40.064 [2024-10-13T11:09:21.666Z] poller_cost: 445 (cyc), 202 (nsec) 00:05:40.064 ************************************ 00:05:40.064 END TEST thread_poller_perf 00:05:40.064 ************************************ 00:05:40.064 00:05:40.064 real 0m1.292s 00:05:40.064 user 0m1.147s 00:05:40.064 sys 0m0.037s 00:05:40.064 11:09:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.064 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.064 11:09:21 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:40.064 ************************************ 00:05:40.064 END TEST thread 00:05:40.064 ************************************ 00:05:40.064 00:05:40.064 real 0m2.779s 00:05:40.064 user 0m2.366s 00:05:40.064 sys 0m0.193s 00:05:40.064 11:09:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.064 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.064 11:09:21 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:40.064 11:09:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.064 11:09:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.064 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.064 ************************************ 00:05:40.064 START TEST accel 00:05:40.064 ************************************ 00:05:40.064 11:09:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:40.064 * Looking for test storage... 00:05:40.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:40.064 11:09:21 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:40.064 11:09:21 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:40.064 11:09:21 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.064 11:09:21 -- accel/accel.sh@59 -- # spdk_tgt_pid=55984 00:05:40.064 11:09:21 -- accel/accel.sh@60 -- # waitforlisten 55984 00:05:40.064 11:09:21 -- common/autotest_common.sh@819 -- # '[' -z 55984 ']' 00:05:40.064 11:09:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.064 11:09:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.064 11:09:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.064 11:09:21 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:40.064 11:09:21 -- accel/accel.sh@58 -- # build_accel_config 00:05:40.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.064 11:09:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.064 11:09:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.064 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.064 11:09:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.064 11:09:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.064 11:09:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.064 11:09:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.064 11:09:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.064 11:09:21 -- accel/accel.sh@42 -- # jq -r . 
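The two poller_cost figures reported by the poller_perf runs above match busy cycles divided by total_run_count, converted to nanoseconds at the reported 2200000000 Hz TSC. A quick recomputation as a sanity check (integer division):

  echo $(( 2208470198 / 359000 ))   # -> 6151 cyc per poll (-l 1 run); 6151 / 2.2 GHz is roughly 2795 nsec
  echo $(( 2203020232 / 4948000 ))  # -> 445 cyc per poll (-l 0 run); 445 / 2.2 GHz is roughly 202 nsec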
00:05:40.064 [2024-10-13 11:09:21.513065] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:40.064 [2024-10-13 11:09:21.513159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55984 ] 00:05:40.064 [2024-10-13 11:09:21.651111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.322 [2024-10-13 11:09:21.703736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.322 [2024-10-13 11:09:21.703874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.258 11:09:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.258 11:09:22 -- common/autotest_common.sh@852 -- # return 0 00:05:41.258 11:09:22 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:41.258 11:09:22 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:41.258 11:09:22 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:41.258 11:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.258 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.258 11:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 
00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # IFS== 00:05:41.258 11:09:22 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.258 11:09:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.258 11:09:22 -- accel/accel.sh@67 -- # killprocess 55984 00:05:41.258 11:09:22 -- common/autotest_common.sh@926 -- # '[' -z 55984 ']' 00:05:41.258 11:09:22 -- common/autotest_common.sh@930 -- # kill -0 55984 00:05:41.258 11:09:22 -- common/autotest_common.sh@931 -- # uname 00:05:41.258 11:09:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.258 11:09:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55984 00:05:41.258 11:09:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.258 11:09:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.258 11:09:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55984' 00:05:41.258 killing process with pid 55984 00:05:41.258 11:09:22 -- common/autotest_common.sh@945 -- # kill 55984 00:05:41.258 11:09:22 -- common/autotest_common.sh@950 -- # wait 55984 00:05:41.517 11:09:22 -- accel/accel.sh@68 -- # trap - ERR 00:05:41.517 11:09:22 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:41.517 11:09:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:41.517 11:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.517 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.517 11:09:22 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:41.517 11:09:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:41.517 11:09:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.517 11:09:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.517 11:09:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.517 11:09:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:41.517 11:09:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.517 11:09:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.517 11:09:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.517 11:09:22 -- accel/accel.sh@42 -- # jq -r . 00:05:41.517 11:09:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.517 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.517 11:09:22 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:41.517 11:09:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:41.517 11:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.517 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.517 ************************************ 00:05:41.517 START TEST accel_missing_filename 00:05:41.517 ************************************ 00:05:41.517 11:09:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:41.517 11:09:22 -- common/autotest_common.sh@640 -- # local es=0 00:05:41.517 11:09:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:41.517 11:09:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:41.517 11:09:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.517 11:09:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:41.517 11:09:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.517 11:09:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:41.517 11:09:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:41.517 11:09:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.517 11:09:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.517 11:09:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.517 11:09:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.517 11:09:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.517 11:09:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.517 11:09:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.517 11:09:22 -- accel/accel.sh@42 -- # jq -r . 00:05:41.517 [2024-10-13 11:09:22.997428] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:41.517 [2024-10-13 11:09:22.997534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56036 ] 00:05:41.776 [2024-10-13 11:09:23.133460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.776 [2024-10-13 11:09:23.180705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.776 [2024-10-13 11:09:23.208782] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.776 [2024-10-13 11:09:23.246463] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:41.776 A filename is required. 
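The accel_missing_filename case above runs accel_perf with -w compress but deliberately leaves out the -l input file, so the 'A filename is required.' error is the expected outcome; the NOT wrapper turns that expected failure into a passing test. A rough sketch of the idiom (the real NOT helper is in common/autotest_common.sh and also inspects the exit status, as the es= handling below shows; the body here is an assumption):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT accel_perf -t 1 -w compress   # passes: compress without -l <file> must fail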
00:05:41.776 11:09:23 -- common/autotest_common.sh@643 -- # es=234 00:05:41.776 11:09:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.776 11:09:23 -- common/autotest_common.sh@652 -- # es=106 00:05:41.776 11:09:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:41.776 11:09:23 -- common/autotest_common.sh@660 -- # es=1 00:05:41.776 11:09:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.776 00:05:41.776 real 0m0.359s 00:05:41.776 user 0m0.223s 00:05:41.776 sys 0m0.080s 00:05:41.776 ************************************ 00:05:41.776 END TEST accel_missing_filename 00:05:41.776 11:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.776 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.776 ************************************ 00:05:41.776 11:09:23 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.776 11:09:23 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:41.776 11:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.776 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.035 ************************************ 00:05:42.035 START TEST accel_compress_verify 00:05:42.035 ************************************ 00:05:42.035 11:09:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.035 11:09:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:42.035 11:09:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.035 11:09:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:42.035 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.035 11:09:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:42.035 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.035 11:09:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.035 11:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.035 11:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.035 11:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.035 11:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.035 11:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.035 11:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.035 11:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.035 11:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.035 11:09:23 -- accel/accel.sh@42 -- # jq -r . 00:05:42.035 [2024-10-13 11:09:23.403440] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:42.035 [2024-10-13 11:09:23.403502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56060 ] 00:05:42.035 [2024-10-13 11:09:23.531482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.035 [2024-10-13 11:09:23.580452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.035 [2024-10-13 11:09:23.609729] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.293 [2024-10-13 11:09:23.650692] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:42.293 00:05:42.293 Compression does not support the verify option, aborting. 00:05:42.293 11:09:23 -- common/autotest_common.sh@643 -- # es=161 00:05:42.293 11:09:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:42.293 11:09:23 -- common/autotest_common.sh@652 -- # es=33 00:05:42.293 11:09:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:42.293 11:09:23 -- common/autotest_common.sh@660 -- # es=1 00:05:42.293 11:09:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:42.294 00:05:42.294 real 0m0.361s 00:05:42.294 user 0m0.242s 00:05:42.294 sys 0m0.062s 00:05:42.294 11:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.294 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.294 ************************************ 00:05:42.294 END TEST accel_compress_verify 00:05:42.294 ************************************ 00:05:42.294 11:09:23 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:42.294 11:09:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:42.294 11:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.294 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.294 ************************************ 00:05:42.294 START TEST accel_wrong_workload 00:05:42.294 ************************************ 00:05:42.294 11:09:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:42.294 11:09:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:42.294 11:09:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:42.294 11:09:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.294 11:09:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:42.294 11:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:42.294 11:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.294 11:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.294 11:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.294 11:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.294 11:09:23 -- accel/accel.sh@42 -- # jq -r . 
00:05:42.294 Unsupported workload type: foobar 00:05:42.294 [2024-10-13 11:09:23.814416] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:42.294 accel_perf options: 00:05:42.294 [-h help message] 00:05:42.294 [-q queue depth per core] 00:05:42.294 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:42.294 [-T number of threads per core 00:05:42.294 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:42.294 [-t time in seconds] 00:05:42.294 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:42.294 [ dif_verify, , dif_generate, dif_generate_copy 00:05:42.294 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:42.294 [-l for compress/decompress workloads, name of uncompressed input file 00:05:42.294 [-S for crc32c workload, use this seed value (default 0) 00:05:42.294 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:42.294 [-f for fill workload, use this BYTE value (default 255) 00:05:42.294 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:42.294 [-y verify result if this switch is on] 00:05:42.294 [-a tasks to allocate per core (default: same value as -q)] 00:05:42.294 Can be used to spread operations across a wider range of memory. 00:05:42.294 11:09:23 -- common/autotest_common.sh@643 -- # es=1 00:05:42.294 ************************************ 00:05:42.294 END TEST accel_wrong_workload 00:05:42.294 ************************************ 00:05:42.294 11:09:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:42.294 11:09:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:42.294 11:09:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:42.294 00:05:42.294 real 0m0.032s 00:05:42.294 user 0m0.019s 00:05:42.294 sys 0m0.013s 00:05:42.294 11:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.294 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.294 11:09:23 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:42.294 11:09:23 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:42.294 11:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.294 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.294 ************************************ 00:05:42.294 START TEST accel_negative_buffers 00:05:42.294 ************************************ 00:05:42.294 11:09:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:42.294 11:09:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:42.294 11:09:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:42.294 11:09:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:42.294 11:09:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.294 11:09:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:42.294 11:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:42.294 11:09:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:42.294 11:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.294 11:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.294 11:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.294 11:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.294 11:09:23 -- accel/accel.sh@42 -- # jq -r . 00:05:42.553 -x option must be non-negative. 00:05:42.553 [2024-10-13 11:09:23.892182] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:42.553 accel_perf options: 00:05:42.553 [-h help message] 00:05:42.553 [-q queue depth per core] 00:05:42.553 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:42.553 [-T number of threads per core 00:05:42.553 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:42.553 [-t time in seconds] 00:05:42.553 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:42.553 [ dif_verify, , dif_generate, dif_generate_copy 00:05:42.553 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:42.553 [-l for compress/decompress workloads, name of uncompressed input file 00:05:42.553 [-S for crc32c workload, use this seed value (default 0) 00:05:42.553 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:42.553 [-f for fill workload, use this BYTE value (default 255) 00:05:42.553 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:42.553 [-y verify result if this switch is on] 00:05:42.553 [-a tasks to allocate per core (default: same value as -q)] 00:05:42.553 Can be used to spread operations across a wider range of memory. 
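The option listing above maps directly onto the invocations used elsewhere in this run; for example (paths shortened, and the -c config descriptor that the test wrapper normally adds is left out):

    accel_perf -t 1 -w crc32c -S 32 -y                # CRC-32C, seed 32, verify results
    accel_perf -t 1 -w crc32c -y -C 2                 # CRC-32C over a 2-element io vector
    accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y     # fill with byte 128 (0x80), queue depth 64
    accel_perf -t 1 -w compress -l ./test/accel/bib   # compress reads its input from -l

The last line is inferred from the -l description above rather than taken from this log; the log only exercises compress in the negative tests.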
00:05:42.553 11:09:23 -- common/autotest_common.sh@643 -- # es=1 00:05:42.553 11:09:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:42.553 11:09:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:42.553 11:09:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:42.553 00:05:42.553 real 0m0.031s 00:05:42.553 user 0m0.018s 00:05:42.553 sys 0m0.012s 00:05:42.553 11:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.553 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.553 ************************************ 00:05:42.553 END TEST accel_negative_buffers 00:05:42.553 ************************************ 00:05:42.553 11:09:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:42.553 11:09:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:42.553 11:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.553 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.553 ************************************ 00:05:42.553 START TEST accel_crc32c 00:05:42.553 ************************************ 00:05:42.553 11:09:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:42.553 11:09:23 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.553 11:09:23 -- accel/accel.sh@17 -- # local accel_module 00:05:42.553 11:09:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:42.553 11:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:42.553 11:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.553 11:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.553 11:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.553 11:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.553 11:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.553 11:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.553 11:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.553 11:09:23 -- accel/accel.sh@42 -- # jq -r . 00:05:42.553 [2024-10-13 11:09:23.970909] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:42.553 [2024-10-13 11:09:23.970993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56119 ] 00:05:42.553 [2024-10-13 11:09:24.104285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.811 [2024-10-13 11:09:24.151998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.746 11:09:25 -- accel/accel.sh@18 -- # out=' 00:05:43.746 SPDK Configuration: 00:05:43.746 Core mask: 0x1 00:05:43.746 00:05:43.746 Accel Perf Configuration: 00:05:43.746 Workload Type: crc32c 00:05:43.746 CRC-32C seed: 32 00:05:43.746 Transfer size: 4096 bytes 00:05:43.746 Vector count 1 00:05:43.746 Module: software 00:05:43.746 Queue depth: 32 00:05:43.746 Allocate depth: 32 00:05:43.746 # threads/core: 1 00:05:43.746 Run time: 1 seconds 00:05:43.746 Verify: Yes 00:05:43.746 00:05:43.746 Running for 1 seconds... 
00:05:43.746 00:05:43.746 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.746 ------------------------------------------------------------------------------------ 00:05:43.746 0,0 522240/s 2040 MiB/s 0 0 00:05:43.746 ==================================================================================== 00:05:43.746 Total 522240/s 2040 MiB/s 0 0' 00:05:43.746 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:43.746 11:09:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.746 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:43.746 11:09:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.746 11:09:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:43.746 11:09:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.746 11:09:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.746 11:09:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.746 11:09:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.746 11:09:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.746 11:09:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.746 11:09:25 -- accel/accel.sh@42 -- # jq -r . 00:05:43.746 [2024-10-13 11:09:25.326546] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:43.746 [2024-10-13 11:09:25.326831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56138 ] 00:05:44.005 [2024-10-13 11:09:25.454014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.005 [2024-10-13 11:09:25.503406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=0x1 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=crc32c 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=32 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=software 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=32 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=32 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=1 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val=Yes 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:44.005 11:09:25 -- accel/accel.sh@21 -- # val= 00:05:44.005 11:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # IFS=: 00:05:44.005 11:09:25 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- 
accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@21 -- # val= 00:05:45.380 11:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # IFS=: 00:05:45.380 11:09:26 -- accel/accel.sh@20 -- # read -r var val 00:05:45.380 11:09:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:45.380 ************************************ 00:05:45.380 END TEST accel_crc32c 00:05:45.380 ************************************ 00:05:45.380 11:09:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:45.380 11:09:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.380 00:05:45.380 real 0m2.713s 00:05:45.380 user 0m2.378s 00:05:45.380 sys 0m0.136s 00:05:45.380 11:09:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.380 11:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.380 11:09:26 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:45.380 11:09:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:45.380 11:09:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.380 11:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.380 ************************************ 00:05:45.380 START TEST accel_crc32c_C2 00:05:45.380 ************************************ 00:05:45.380 11:09:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:45.380 11:09:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.380 11:09:26 -- accel/accel.sh@17 -- # local accel_module 00:05:45.380 11:09:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:45.380 11:09:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:45.380 11:09:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.380 11:09:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.380 11:09:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.380 11:09:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.380 11:09:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.380 11:09:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.380 11:09:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.380 11:09:26 -- accel/accel.sh@42 -- # jq -r . 00:05:45.380 [2024-10-13 11:09:26.734507] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:45.380 [2024-10-13 11:09:26.734604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56167 ] 00:05:45.380 [2024-10-13 11:09:26.868786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.380 [2024-10-13 11:09:26.915919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.754 11:09:28 -- accel/accel.sh@18 -- # out=' 00:05:46.754 SPDK Configuration: 00:05:46.754 Core mask: 0x1 00:05:46.754 00:05:46.754 Accel Perf Configuration: 00:05:46.754 Workload Type: crc32c 00:05:46.754 CRC-32C seed: 0 00:05:46.754 Transfer size: 4096 bytes 00:05:46.754 Vector count 2 00:05:46.754 Module: software 00:05:46.754 Queue depth: 32 00:05:46.754 Allocate depth: 32 00:05:46.754 # threads/core: 1 00:05:46.754 Run time: 1 seconds 00:05:46.754 Verify: Yes 00:05:46.754 00:05:46.754 Running for 1 seconds... 
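In these result tables the Bandwidth column follows from the Transfers column and the 4096-byte transfer size. For the single-vector crc32c run above, for instance:

    transfers_per_sec=522240
    xfer_bytes=4096
    echo $(( transfers_per_sec * xfer_bytes / 1024 / 1024 ))   # -> 2040 (MiB/s)

The copy and fill runs later in the log line up the same way (359872/s -> 1405 MiB/s, 499776/s -> 1952 MiB/s).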
00:05:46.754 00:05:46.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.754 ------------------------------------------------------------------------------------ 00:05:46.754 0,0 407072/s 3180 MiB/s 0 0 00:05:46.754 ==================================================================================== 00:05:46.754 Total 407072/s 1590 MiB/s 0 0' 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.754 11:09:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.754 11:09:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.754 11:09:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.754 11:09:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.754 11:09:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.754 11:09:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.754 11:09:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.754 11:09:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.754 11:09:28 -- accel/accel.sh@42 -- # jq -r . 00:05:46.754 [2024-10-13 11:09:28.085764] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:46.754 [2024-10-13 11:09:28.086592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56187 ] 00:05:46.754 [2024-10-13 11:09:28.225983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.754 [2024-10-13 11:09:28.283564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=0x1 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=crc32c 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=0 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=software 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=32 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=32 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=1 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val=Yes 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:46.754 11:09:28 -- accel/accel.sh@21 -- # val= 00:05:46.754 11:09:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # IFS=: 00:05:46.754 11:09:28 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- 
accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@21 -- # val= 00:05:48.131 11:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # IFS=: 00:05:48.131 11:09:29 -- accel/accel.sh@20 -- # read -r var val 00:05:48.131 11:09:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.131 11:09:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:48.131 11:09:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.131 00:05:48.131 real 0m2.736s 00:05:48.131 user 0m2.395s 00:05:48.131 sys 0m0.143s 00:05:48.131 11:09:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.131 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:48.131 ************************************ 00:05:48.131 END TEST accel_crc32c_C2 00:05:48.131 ************************************ 00:05:48.131 11:09:29 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:48.131 11:09:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:48.131 11:09:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.131 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:48.131 ************************************ 00:05:48.131 START TEST accel_copy 00:05:48.131 ************************************ 00:05:48.131 11:09:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:48.131 11:09:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.131 11:09:29 -- accel/accel.sh@17 -- # local accel_module 00:05:48.131 11:09:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:48.131 11:09:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:48.131 11:09:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.131 11:09:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.131 11:09:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.131 11:09:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.131 11:09:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.131 11:09:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.131 11:09:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.131 11:09:29 -- accel/accel.sh@42 -- # jq -r . 00:05:48.131 [2024-10-13 11:09:29.523085] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:48.131 [2024-10-13 11:09:29.523345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56221 ] 00:05:48.131 [2024-10-13 11:09:29.655530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.131 [2024-10-13 11:09:29.705824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.507 11:09:30 -- accel/accel.sh@18 -- # out=' 00:05:49.507 SPDK Configuration: 00:05:49.507 Core mask: 0x1 00:05:49.507 00:05:49.507 Accel Perf Configuration: 00:05:49.507 Workload Type: copy 00:05:49.507 Transfer size: 4096 bytes 00:05:49.507 Vector count 1 00:05:49.507 Module: software 00:05:49.507 Queue depth: 32 00:05:49.507 Allocate depth: 32 00:05:49.507 # threads/core: 1 00:05:49.507 Run time: 1 seconds 00:05:49.507 Verify: Yes 00:05:49.507 00:05:49.507 Running for 1 seconds... 
00:05:49.507 00:05:49.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.507 ------------------------------------------------------------------------------------ 00:05:49.507 0,0 359872/s 1405 MiB/s 0 0 00:05:49.507 ==================================================================================== 00:05:49.507 Total 359872/s 1405 MiB/s 0 0' 00:05:49.507 11:09:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:49.507 11:09:30 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:30 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:49.507 11:09:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.507 11:09:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.507 11:09:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.507 11:09:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.507 11:09:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.507 11:09:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.507 11:09:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.507 11:09:30 -- accel/accel.sh@42 -- # jq -r . 00:05:49.507 [2024-10-13 11:09:30.879350] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:49.507 [2024-10-13 11:09:30.879640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56235 ] 00:05:49.507 [2024-10-13 11:09:31.008476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.507 [2024-10-13 11:09:31.058673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val=0x1 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.507 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.507 11:09:31 -- accel/accel.sh@21 -- # val=copy 00:05:49.507 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- 
accel/accel.sh@21 -- # val= 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val=software 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val=32 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val=32 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val=1 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val=Yes 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:49.508 11:09:31 -- accel/accel.sh@21 -- # val= 00:05:49.508 11:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # IFS=: 00:05:49.508 11:09:31 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@21 -- # val= 00:05:50.885 11:09:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.885 11:09:32 -- accel/accel.sh@20 -- # IFS=: 00:05:50.885 11:09:32 -- 
accel/accel.sh@20 -- # read -r var val 00:05:50.885 11:09:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.885 11:09:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:50.885 11:09:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.885 00:05:50.885 real 0m2.717s 00:05:50.885 user 0m2.373s 00:05:50.885 sys 0m0.139s 00:05:50.885 11:09:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.885 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.885 ************************************ 00:05:50.885 END TEST accel_copy 00:05:50.885 ************************************ 00:05:50.885 11:09:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.885 11:09:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:50.885 11:09:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.885 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.885 ************************************ 00:05:50.885 START TEST accel_fill 00:05:50.885 ************************************ 00:05:50.885 11:09:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.885 11:09:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.885 11:09:32 -- accel/accel.sh@17 -- # local accel_module 00:05:50.885 11:09:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.885 11:09:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.885 11:09:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.885 11:09:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.885 11:09:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.885 11:09:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.885 11:09:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.885 11:09:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.885 11:09:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.885 11:09:32 -- accel/accel.sh@42 -- # jq -r . 00:05:50.885 [2024-10-13 11:09:32.286845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:50.885 [2024-10-13 11:09:32.286935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56270 ] 00:05:50.885 [2024-10-13 11:09:32.422223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.885 [2024-10-13 11:09:32.471778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.264 11:09:33 -- accel/accel.sh@18 -- # out=' 00:05:52.264 SPDK Configuration: 00:05:52.264 Core mask: 0x1 00:05:52.264 00:05:52.264 Accel Perf Configuration: 00:05:52.264 Workload Type: fill 00:05:52.264 Fill pattern: 0x80 00:05:52.264 Transfer size: 4096 bytes 00:05:52.264 Vector count 1 00:05:52.264 Module: software 00:05:52.264 Queue depth: 64 00:05:52.264 Allocate depth: 64 00:05:52.264 # threads/core: 1 00:05:52.264 Run time: 1 seconds 00:05:52.264 Verify: Yes 00:05:52.264 00:05:52.264 Running for 1 seconds... 
00:05:52.264 00:05:52.264 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:52.264 ------------------------------------------------------------------------------------ 00:05:52.264 0,0 499776/s 1952 MiB/s 0 0 00:05:52.264 ==================================================================================== 00:05:52.264 Total 499776/s 1952 MiB/s 0 0' 00:05:52.264 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.264 11:09:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.264 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.264 11:09:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.264 11:09:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.264 11:09:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.264 11:09:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.264 11:09:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.264 11:09:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.264 11:09:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.264 11:09:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.264 11:09:33 -- accel/accel.sh@42 -- # jq -r . 00:05:52.264 [2024-10-13 11:09:33.653241] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:52.264 [2024-10-13 11:09:33.653350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56289 ] 00:05:52.264 [2024-10-13 11:09:33.789327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.264 [2024-10-13 11:09:33.841017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=0x1 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=fill 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=0x80 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 
00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=software 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=64 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=64 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=1 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val=Yes 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:52.523 11:09:33 -- accel/accel.sh@21 -- # val= 00:05:52.523 11:09:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # IFS=: 00:05:52.523 11:09:33 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 
00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:34 -- accel/accel.sh@21 -- # val= 00:05:53.461 11:09:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.461 11:09:34 -- accel/accel.sh@20 -- # IFS=: 00:05:53.461 11:09:35 -- accel/accel.sh@20 -- # read -r var val 00:05:53.461 11:09:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.461 ************************************ 00:05:53.461 END TEST accel_fill 00:05:53.461 ************************************ 00:05:53.461 11:09:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:53.461 11:09:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.461 00:05:53.461 real 0m2.739s 00:05:53.461 user 0m2.392s 00:05:53.461 sys 0m0.142s 00:05:53.461 11:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.461 11:09:35 -- common/autotest_common.sh@10 -- # set +x 00:05:53.461 11:09:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:53.461 11:09:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:53.461 11:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.461 11:09:35 -- common/autotest_common.sh@10 -- # set +x 00:05:53.461 ************************************ 00:05:53.461 START TEST accel_copy_crc32c 00:05:53.461 ************************************ 00:05:53.461 11:09:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:53.461 11:09:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.461 11:09:35 -- accel/accel.sh@17 -- # local accel_module 00:05:53.461 11:09:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:53.461 11:09:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:53.461 11:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.461 11:09:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.461 11:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.461 11:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.461 11:09:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.461 11:09:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.461 11:09:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.461 11:09:35 -- accel/accel.sh@42 -- # jq -r . 00:05:53.720 [2024-10-13 11:09:35.075145] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:53.720 [2024-10-13 11:09:35.075425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56324 ] 00:05:53.720 [2024-10-13 11:09:35.211280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.720 [2024-10-13 11:09:35.261125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.098 11:09:36 -- accel/accel.sh@18 -- # out=' 00:05:55.098 SPDK Configuration: 00:05:55.098 Core mask: 0x1 00:05:55.098 00:05:55.098 Accel Perf Configuration: 00:05:55.098 Workload Type: copy_crc32c 00:05:55.098 CRC-32C seed: 0 00:05:55.098 Vector size: 4096 bytes 00:05:55.098 Transfer size: 4096 bytes 00:05:55.098 Vector count 1 00:05:55.098 Module: software 00:05:55.098 Queue depth: 32 00:05:55.098 Allocate depth: 32 00:05:55.098 # threads/core: 1 00:05:55.098 Run time: 1 seconds 00:05:55.098 Verify: Yes 00:05:55.098 00:05:55.098 Running for 1 seconds... 
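The copy_crc32c pass running here takes no extra sizing flags, so it uses the defaults visible in the configuration dump: 4096-byte vectors and transfers, CRC-32C seed 0, and queue/allocate depth 32. A minimal stand-alone sketch under the same path assumption as above (the -C 2 variant exercised a few tests later simply raises the vector count to two, i.e. 8192 bytes per operation):

  # combined copy + CRC-32C over 4 KiB buffers, verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y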
00:05:55.098 00:05:55.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:55.098 ------------------------------------------------------------------------------------ 00:05:55.098 0,0 272192/s 1063 MiB/s 0 0 00:05:55.098 ==================================================================================== 00:05:55.098 Total 272192/s 1063 MiB/s 0 0' 00:05:55.098 11:09:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:55.098 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.098 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.098 11:09:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:55.098 11:09:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.099 11:09:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.099 11:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.099 11:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.099 11:09:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.099 11:09:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.099 11:09:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.099 11:09:36 -- accel/accel.sh@42 -- # jq -r . 00:05:55.099 [2024-10-13 11:09:36.429567] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:55.099 [2024-10-13 11:09:36.429652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56338 ] 00:05:55.099 [2024-10-13 11:09:36.559528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.099 [2024-10-13 11:09:36.615316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=0x1 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=0 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 
11:09:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=software 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=32 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=32 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=1 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val=Yes 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:55.099 11:09:36 -- accel/accel.sh@21 -- # val= 00:05:55.099 11:09:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # IFS=: 00:05:55.099 11:09:36 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 
00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@21 -- # val= 00:05:56.483 11:09:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # IFS=: 00:05:56.483 11:09:37 -- accel/accel.sh@20 -- # read -r var val 00:05:56.483 11:09:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.483 11:09:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:56.483 11:09:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.483 00:05:56.483 real 0m2.723s 00:05:56.483 user 0m2.384s 00:05:56.483 sys 0m0.137s 00:05:56.483 11:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.483 ************************************ 00:05:56.483 END TEST accel_copy_crc32c 00:05:56.483 ************************************ 00:05:56.483 11:09:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 11:09:37 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.484 11:09:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:56.484 11:09:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.484 11:09:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 ************************************ 00:05:56.484 START TEST accel_copy_crc32c_C2 00:05:56.484 ************************************ 00:05:56.484 11:09:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.484 11:09:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.484 11:09:37 -- accel/accel.sh@17 -- # local accel_module 00:05:56.484 11:09:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:56.484 11:09:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:56.484 11:09:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.484 11:09:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.484 11:09:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.484 11:09:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.484 11:09:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.484 11:09:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.484 11:09:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.484 11:09:37 -- accel/accel.sh@42 -- # jq -r . 00:05:56.484 [2024-10-13 11:09:37.844869] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:56.484 [2024-10-13 11:09:37.845114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56372 ] 00:05:56.484 [2024-10-13 11:09:37.982362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.484 [2024-10-13 11:09:38.031130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.863 11:09:39 -- accel/accel.sh@18 -- # out=' 00:05:57.863 SPDK Configuration: 00:05:57.863 Core mask: 0x1 00:05:57.863 00:05:57.863 Accel Perf Configuration: 00:05:57.863 Workload Type: copy_crc32c 00:05:57.863 CRC-32C seed: 0 00:05:57.863 Vector size: 4096 bytes 00:05:57.863 Transfer size: 8192 bytes 00:05:57.863 Vector count 2 00:05:57.863 Module: software 00:05:57.863 Queue depth: 32 00:05:57.863 Allocate depth: 32 00:05:57.863 # threads/core: 1 00:05:57.863 Run time: 1 seconds 00:05:57.863 Verify: Yes 00:05:57.863 00:05:57.863 Running for 1 seconds... 00:05:57.863 00:05:57.863 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.863 ------------------------------------------------------------------------------------ 00:05:57.863 0,0 198368/s 1549 MiB/s 0 0 00:05:57.863 ==================================================================================== 00:05:57.863 Total 198368/s 774 MiB/s 0 0' 00:05:57.863 11:09:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:57.863 11:09:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.863 11:09:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.863 11:09:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.863 11:09:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.863 11:09:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.863 11:09:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.863 11:09:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.863 11:09:39 -- accel/accel.sh@42 -- # jq -r . 00:05:57.863 [2024-10-13 11:09:39.230246] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:57.863 [2024-10-13 11:09:39.230370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56392 ] 00:05:57.863 [2024-10-13 11:09:39.363673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.863 [2024-10-13 11:09:39.413828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=0x1 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=0 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=software 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=32 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=32 
00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=1 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val=Yes 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:57.863 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:57.863 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:57.863 11:09:39 -- accel/accel.sh@21 -- # val= 00:05:58.122 11:09:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.122 11:09:39 -- accel/accel.sh@20 -- # IFS=: 00:05:58.122 11:09:39 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@21 -- # val= 00:05:59.063 11:09:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 11:09:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 11:09:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.063 11:09:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:59.063 11:09:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.063 00:05:59.063 real 0m2.755s 00:05:59.063 user 0m2.419s 00:05:59.063 sys 0m0.136s 00:05:59.063 11:09:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.063 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.063 ************************************ 00:05:59.063 END TEST accel_copy_crc32c_C2 00:05:59.063 ************************************ 00:05:59.063 11:09:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:59.063 11:09:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:05:59.063 11:09:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.063 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.063 ************************************ 00:05:59.063 START TEST accel_dualcast 00:05:59.063 ************************************ 00:05:59.063 11:09:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:59.063 11:09:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.063 11:09:40 -- accel/accel.sh@17 -- # local accel_module 00:05:59.063 11:09:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:59.063 11:09:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:59.063 11:09:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.063 11:09:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.063 11:09:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.063 11:09:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.063 11:09:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.063 11:09:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.063 11:09:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.063 11:09:40 -- accel/accel.sh@42 -- # jq -r . 00:05:59.063 [2024-10-13 11:09:40.652932] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:59.063 [2024-10-13 11:09:40.653023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56421 ] 00:05:59.322 [2024-10-13 11:09:40.791588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.322 [2024-10-13 11:09:40.851018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.699 11:09:42 -- accel/accel.sh@18 -- # out=' 00:06:00.699 SPDK Configuration: 00:06:00.699 Core mask: 0x1 00:06:00.699 00:06:00.699 Accel Perf Configuration: 00:06:00.699 Workload Type: dualcast 00:06:00.699 Transfer size: 4096 bytes 00:06:00.699 Vector count 1 00:06:00.699 Module: software 00:06:00.699 Queue depth: 32 00:06:00.699 Allocate depth: 32 00:06:00.699 # threads/core: 1 00:06:00.699 Run time: 1 seconds 00:06:00.699 Verify: Yes 00:06:00.699 00:06:00.699 Running for 1 seconds... 00:06:00.699 00:06:00.699 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.699 ------------------------------------------------------------------------------------ 00:06:00.699 0,0 386592/s 1510 MiB/s 0 0 00:06:00.699 ==================================================================================== 00:06:00.699 Total 386592/s 1510 MiB/s 0 0' 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:00.699 11:09:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.699 11:09:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.699 11:09:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.699 11:09:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.699 11:09:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.699 11:09:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.699 11:09:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.699 11:09:42 -- accel/accel.sh@42 -- # jq -r . 
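Before the second dualcast pass starts below, note what the first pass just showed: dualcast duplicates each 4 KiB source buffer into two destination buffers, and the software module sustained about 1510 MiB/s on one core. A comparable stand-alone invocation, under the same path assumption as the earlier sketches:

  # write each 4 KiB source buffer to two destinations, verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y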
00:06:00.699 [2024-10-13 11:09:42.032004] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:00.699 [2024-10-13 11:09:42.032075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56440 ] 00:06:00.699 [2024-10-13 11:09:42.162240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.699 [2024-10-13 11:09:42.210686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=0x1 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=dualcast 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=software 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=32 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=32 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=1 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 
11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val=Yes 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.699 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:00.699 11:09:42 -- accel/accel.sh@21 -- # val= 00:06:00.699 11:09:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.700 11:09:42 -- accel/accel.sh@20 -- # IFS=: 00:06:00.700 11:09:42 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@21 -- # val= 00:06:02.100 11:09:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.100 11:09:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.100 11:09:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.100 11:09:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:02.100 11:09:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.100 00:06:02.100 real 0m2.734s 00:06:02.100 user 0m2.394s 00:06:02.100 sys 0m0.139s 00:06:02.100 11:09:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.100 ************************************ 00:06:02.100 END TEST accel_dualcast 00:06:02.100 ************************************ 00:06:02.100 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.100 11:09:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:02.100 11:09:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:02.100 11:09:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.100 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.100 ************************************ 00:06:02.100 START TEST accel_compare 00:06:02.100 ************************************ 00:06:02.100 11:09:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:02.100 
11:09:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.100 11:09:43 -- accel/accel.sh@17 -- # local accel_module 00:06:02.100 11:09:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:02.100 11:09:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:02.100 11:09:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.100 11:09:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.100 11:09:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.100 11:09:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.100 11:09:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.100 11:09:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.100 11:09:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.100 11:09:43 -- accel/accel.sh@42 -- # jq -r . 00:06:02.100 [2024-10-13 11:09:43.435624] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:02.100 [2024-10-13 11:09:43.435722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56475 ] 00:06:02.100 [2024-10-13 11:09:43.563903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.100 [2024-10-13 11:09:43.611882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.478 11:09:44 -- accel/accel.sh@18 -- # out=' 00:06:03.478 SPDK Configuration: 00:06:03.478 Core mask: 0x1 00:06:03.478 00:06:03.478 Accel Perf Configuration: 00:06:03.478 Workload Type: compare 00:06:03.478 Transfer size: 4096 bytes 00:06:03.478 Vector count 1 00:06:03.478 Module: software 00:06:03.478 Queue depth: 32 00:06:03.478 Allocate depth: 32 00:06:03.478 # threads/core: 1 00:06:03.478 Run time: 1 seconds 00:06:03.478 Verify: Yes 00:06:03.478 00:06:03.478 Running for 1 seconds... 00:06:03.478 00:06:03.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.478 ------------------------------------------------------------------------------------ 00:06:03.478 0,0 523648/s 2045 MiB/s 0 0 00:06:03.478 ==================================================================================== 00:06:03.478 Total 523648/s 2045 MiB/s 0 0' 00:06:03.478 11:09:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:03.478 11:09:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.478 11:09:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.478 11:09:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.478 11:09:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.478 11:09:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.478 11:09:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.478 11:09:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.478 11:09:44 -- accel/accel.sh@42 -- # jq -r . 00:06:03.478 [2024-10-13 11:09:44.775184] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:03.478 [2024-10-13 11:09:44.775267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56489 ] 00:06:03.478 [2024-10-13 11:09:44.903065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.478 [2024-10-13 11:09:44.952089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=0x1 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=compare 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=software 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=32 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=32 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=1 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val=Yes 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:03.478 11:09:44 -- accel/accel.sh@21 -- # val= 00:06:03.478 11:09:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # IFS=: 00:06:03.478 11:09:44 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@21 -- # val= 00:06:04.857 11:09:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # IFS=: 00:06:04.857 11:09:46 -- accel/accel.sh@20 -- # read -r var val 00:06:04.857 11:09:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.857 11:09:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:04.857 11:09:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.857 00:06:04.857 real 0m2.702s 00:06:04.857 user 0m2.372s 00:06:04.857 sys 0m0.127s 00:06:04.857 11:09:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.857 ************************************ 00:06:04.857 END TEST accel_compare 00:06:04.857 ************************************ 00:06:04.857 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:06:04.857 11:09:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:04.857 11:09:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:04.857 11:09:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.857 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:06:04.857 ************************************ 00:06:04.857 START TEST accel_xor 00:06:04.857 ************************************ 00:06:04.857 11:09:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:04.857 11:09:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.857 11:09:46 -- accel/accel.sh@17 -- # local accel_module 00:06:04.857 
11:09:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:04.857 11:09:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:04.857 11:09:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.857 11:09:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.857 11:09:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.857 11:09:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.857 11:09:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.857 11:09:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.857 11:09:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.857 11:09:46 -- accel/accel.sh@42 -- # jq -r . 00:06:04.857 [2024-10-13 11:09:46.191130] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:04.857 [2024-10-13 11:09:46.191401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56523 ] 00:06:04.857 [2024-10-13 11:09:46.326506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.857 [2024-10-13 11:09:46.374251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.235 11:09:47 -- accel/accel.sh@18 -- # out=' 00:06:06.235 SPDK Configuration: 00:06:06.235 Core mask: 0x1 00:06:06.235 00:06:06.235 Accel Perf Configuration: 00:06:06.235 Workload Type: xor 00:06:06.235 Source buffers: 2 00:06:06.235 Transfer size: 4096 bytes 00:06:06.235 Vector count 1 00:06:06.235 Module: software 00:06:06.235 Queue depth: 32 00:06:06.235 Allocate depth: 32 00:06:06.235 # threads/core: 1 00:06:06.235 Run time: 1 seconds 00:06:06.235 Verify: Yes 00:06:06.235 00:06:06.235 Running for 1 seconds... 00:06:06.235 00:06:06.235 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.235 ------------------------------------------------------------------------------------ 00:06:06.235 0,0 280608/s 1096 MiB/s 0 0 00:06:06.235 ==================================================================================== 00:06:06.235 Total 280608/s 1096 MiB/s 0 0' 00:06:06.235 11:09:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.235 11:09:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:06.235 11:09:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.235 11:09:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.235 11:09:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.235 11:09:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.235 11:09:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.235 11:09:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.235 11:09:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.235 11:09:47 -- accel/accel.sh@42 -- # jq -r . 00:06:06.235 [2024-10-13 11:09:47.539079] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:06.235 [2024-10-13 11:09:47.539324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56543 ] 00:06:06.235 [2024-10-13 11:09:47.667083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.235 [2024-10-13 11:09:47.718319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.235 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.235 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.235 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.235 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.235 11:09:47 -- accel/accel.sh@21 -- # val=0x1 00:06:06.235 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.235 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.235 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=xor 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=2 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=software 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=32 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=32 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=1 00:06:06.236 11:09:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val=Yes 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:06.236 11:09:47 -- accel/accel.sh@21 -- # val= 00:06:06.236 11:09:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # IFS=: 00:06:06.236 11:09:47 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 ************************************ 00:06:07.635 END TEST accel_xor 00:06:07.635 ************************************ 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@21 -- # val= 00:06:07.635 11:09:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.635 11:09:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.635 11:09:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.635 11:09:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:07.635 11:09:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.635 00:06:07.635 real 0m2.707s 00:06:07.635 user 0m2.380s 00:06:07.635 sys 0m0.128s 00:06:07.635 11:09:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.635 11:09:48 -- common/autotest_common.sh@10 -- # set +x 00:06:07.635 11:09:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:07.635 11:09:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:07.635 11:09:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.635 11:09:48 -- common/autotest_common.sh@10 -- # set +x 00:06:07.635 ************************************ 00:06:07.635 START TEST accel_xor 00:06:07.635 ************************************ 00:06:07.635 
11:09:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:07.635 11:09:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.635 11:09:48 -- accel/accel.sh@17 -- # local accel_module 00:06:07.635 11:09:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:07.635 11:09:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:07.635 11:09:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.635 11:09:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.635 11:09:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.635 11:09:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.635 11:09:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.635 11:09:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.635 11:09:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.635 11:09:48 -- accel/accel.sh@42 -- # jq -r . 00:06:07.635 [2024-10-13 11:09:48.944741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:07.635 [2024-10-13 11:09:48.944831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56573 ] 00:06:07.635 [2024-10-13 11:09:49.081774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.635 [2024-10-13 11:09:49.133451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.013 11:09:50 -- accel/accel.sh@18 -- # out=' 00:06:09.013 SPDK Configuration: 00:06:09.013 Core mask: 0x1 00:06:09.013 00:06:09.013 Accel Perf Configuration: 00:06:09.013 Workload Type: xor 00:06:09.013 Source buffers: 3 00:06:09.013 Transfer size: 4096 bytes 00:06:09.013 Vector count 1 00:06:09.013 Module: software 00:06:09.013 Queue depth: 32 00:06:09.013 Allocate depth: 32 00:06:09.013 # threads/core: 1 00:06:09.013 Run time: 1 seconds 00:06:09.013 Verify: Yes 00:06:09.013 00:06:09.013 Running for 1 seconds... 00:06:09.013 00:06:09.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.013 ------------------------------------------------------------------------------------ 00:06:09.013 0,0 267360/s 1044 MiB/s 0 0 00:06:09.013 ==================================================================================== 00:06:09.013 Total 267360/s 1044 MiB/s 0 0' 00:06:09.013 11:09:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:09.013 11:09:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.013 11:09:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.013 11:09:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.013 11:09:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.013 11:09:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.013 11:09:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.013 11:09:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.013 11:09:50 -- accel/accel.sh@42 -- # jq -r . 00:06:09.013 [2024-10-13 11:09:50.308182] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:09.013 [2024-10-13 11:09:50.308266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56598 ] 00:06:09.013 [2024-10-13 11:09:50.435991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.013 [2024-10-13 11:09:50.483848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=0x1 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=xor 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=3 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=software 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=32 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=32 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=1 00:06:09.013 11:09:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val=Yes 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:09.013 11:09:50 -- accel/accel.sh@21 -- # val= 00:06:09.013 11:09:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # IFS=: 00:06:09.013 11:09:50 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@21 -- # val= 00:06:10.391 11:09:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.391 11:09:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.391 11:09:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.391 11:09:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:10.391 11:09:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.391 00:06:10.391 real 0m2.715s 00:06:10.391 user 0m2.384s 00:06:10.391 sys 0m0.130s 00:06:10.391 11:09:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.391 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:10.391 ************************************ 00:06:10.391 END TEST accel_xor 00:06:10.391 ************************************ 00:06:10.391 11:09:51 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:10.391 11:09:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:10.391 11:09:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.391 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:10.391 ************************************ 00:06:10.391 START TEST accel_dif_verify 00:06:10.391 ************************************ 
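For orientation before the dif_verify trace, a sketch of the equivalent direct invocation; every flag is taken from the trace itself, and the remark about Verify is an inference rather than something the log states.
# dif_verify over 4096-byte transfers split into 512-byte blocks with 8 bytes of
# metadata (per the configuration block below); "Verify: No" in that block is
# presumably because the DIF check is the workload itself.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify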
00:06:10.391 11:09:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:10.391 11:09:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.391 11:09:51 -- accel/accel.sh@17 -- # local accel_module 00:06:10.391 11:09:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:10.391 11:09:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.391 11:09:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.391 11:09:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.391 11:09:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.391 11:09:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.391 11:09:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.391 11:09:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.391 11:09:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.391 11:09:51 -- accel/accel.sh@42 -- # jq -r . 00:06:10.391 [2024-10-13 11:09:51.714073] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:10.391 [2024-10-13 11:09:51.714161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56627 ] 00:06:10.391 [2024-10-13 11:09:51.851551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.391 [2024-10-13 11:09:51.903463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.770 11:09:53 -- accel/accel.sh@18 -- # out=' 00:06:11.770 SPDK Configuration: 00:06:11.770 Core mask: 0x1 00:06:11.770 00:06:11.770 Accel Perf Configuration: 00:06:11.770 Workload Type: dif_verify 00:06:11.770 Vector size: 4096 bytes 00:06:11.770 Transfer size: 4096 bytes 00:06:11.770 Block size: 512 bytes 00:06:11.770 Metadata size: 8 bytes 00:06:11.770 Vector count 1 00:06:11.770 Module: software 00:06:11.770 Queue depth: 32 00:06:11.770 Allocate depth: 32 00:06:11.770 # threads/core: 1 00:06:11.770 Run time: 1 seconds 00:06:11.770 Verify: No 00:06:11.770 00:06:11.770 Running for 1 seconds... 00:06:11.770 00:06:11.770 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.770 ------------------------------------------------------------------------------------ 00:06:11.770 0,0 117024/s 464 MiB/s 0 0 00:06:11.770 ==================================================================================== 00:06:11.770 Total 117024/s 457 MiB/s 0 0' 00:06:11.770 11:09:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:11.770 11:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.770 11:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.770 11:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.770 11:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.770 11:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.770 11:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.770 11:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.770 11:09:53 -- accel/accel.sh@42 -- # jq -r . 00:06:11.770 [2024-10-13 11:09:53.077134] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:11.770 [2024-10-13 11:09:53.077224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56641 ] 00:06:11.770 [2024-10-13 11:09:53.212767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.770 [2024-10-13 11:09:53.261999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=0x1 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=dif_verify 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=software 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 
-- # val=32 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=32 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=1 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val=No 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:11.770 11:09:53 -- accel/accel.sh@21 -- # val= 00:06:11.770 11:09:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # IFS=: 00:06:11.770 11:09:53 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@21 -- # val= 00:06:13.149 11:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # IFS=: 00:06:13.149 11:09:54 -- accel/accel.sh@20 -- # read -r var val 00:06:13.149 11:09:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.149 11:09:54 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:13.149 11:09:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.149 00:06:13.149 real 0m2.733s 00:06:13.149 user 0m2.397s 00:06:13.149 sys 0m0.137s 00:06:13.149 11:09:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.149 ************************************ 00:06:13.149 END TEST accel_dif_verify 00:06:13.149 ************************************ 00:06:13.149 
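The following stages exercise DIF generation; a sketch of the corresponding direct invocations, again using only flags visible in their traces (the behavioural note is inferred from the workload names, not from the log).
# dif_generate produces DIF-protected output over the same 4096/512/8 geometry;
# dif_generate_copy additionally writes the payload to a destination buffer.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy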
11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.149 11:09:54 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:13.149 11:09:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:13.149 11:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.149 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.149 ************************************ 00:06:13.149 START TEST accel_dif_generate 00:06:13.149 ************************************ 00:06:13.149 11:09:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:13.149 11:09:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.149 11:09:54 -- accel/accel.sh@17 -- # local accel_module 00:06:13.150 11:09:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:13.150 11:09:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:13.150 11:09:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.150 11:09:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.150 11:09:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.150 11:09:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.150 11:09:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.150 11:09:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.150 11:09:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.150 11:09:54 -- accel/accel.sh@42 -- # jq -r . 00:06:13.150 [2024-10-13 11:09:54.491982] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:13.150 [2024-10-13 11:09:54.492064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56681 ] 00:06:13.150 [2024-10-13 11:09:54.619284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.150 [2024-10-13 11:09:54.668702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.551 11:09:55 -- accel/accel.sh@18 -- # out=' 00:06:14.551 SPDK Configuration: 00:06:14.551 Core mask: 0x1 00:06:14.551 00:06:14.551 Accel Perf Configuration: 00:06:14.551 Workload Type: dif_generate 00:06:14.551 Vector size: 4096 bytes 00:06:14.551 Transfer size: 4096 bytes 00:06:14.551 Block size: 512 bytes 00:06:14.551 Metadata size: 8 bytes 00:06:14.551 Vector count 1 00:06:14.551 Module: software 00:06:14.551 Queue depth: 32 00:06:14.551 Allocate depth: 32 00:06:14.551 # threads/core: 1 00:06:14.551 Run time: 1 seconds 00:06:14.551 Verify: No 00:06:14.551 00:06:14.551 Running for 1 seconds... 
00:06:14.551 00:06:14.551 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.551 ------------------------------------------------------------------------------------ 00:06:14.551 0,0 141824/s 562 MiB/s 0 0 00:06:14.551 ==================================================================================== 00:06:14.551 Total 141824/s 554 MiB/s 0 0' 00:06:14.551 11:09:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:14.551 11:09:55 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:55 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:14.551 11:09:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.551 11:09:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.551 11:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.551 11:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.551 11:09:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.551 11:09:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.551 11:09:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.551 11:09:55 -- accel/accel.sh@42 -- # jq -r . 00:06:14.551 [2024-10-13 11:09:55.831564] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:14.551 [2024-10-13 11:09:55.831634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56695 ] 00:06:14.551 [2024-10-13 11:09:55.958280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.551 [2024-10-13 11:09:56.007795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val=0x1 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val=dif_generate 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 
00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.551 11:09:56 -- accel/accel.sh@21 -- # val=software 00:06:14.551 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.551 11:09:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.551 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val=32 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val=32 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val=1 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val=No 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:14.552 11:09:56 -- accel/accel.sh@21 -- # val= 00:06:14.552 11:09:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # IFS=: 00:06:14.552 11:09:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- 
accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@21 -- # val= 00:06:15.938 11:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # IFS=: 00:06:15.938 11:09:57 -- accel/accel.sh@20 -- # read -r var val 00:06:15.938 11:09:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.938 11:09:57 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:15.938 11:09:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.938 00:06:15.938 real 0m2.699s 00:06:15.938 user 0m2.365s 00:06:15.938 sys 0m0.137s 00:06:15.938 11:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.938 11:09:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.938 ************************************ 00:06:15.938 END TEST accel_dif_generate 00:06:15.938 ************************************ 00:06:15.938 11:09:57 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:15.938 11:09:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:15.938 11:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.938 11:09:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.938 ************************************ 00:06:15.938 START TEST accel_dif_generate_copy 00:06:15.938 ************************************ 00:06:15.938 11:09:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:15.938 11:09:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.938 11:09:57 -- accel/accel.sh@17 -- # local accel_module 00:06:15.938 11:09:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:15.938 11:09:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:15.938 11:09:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.938 11:09:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.938 11:09:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.938 11:09:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.938 11:09:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.938 11:09:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.938 11:09:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.938 11:09:57 -- accel/accel.sh@42 -- # jq -r . 00:06:15.938 [2024-10-13 11:09:57.245595] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:15.938 [2024-10-13 11:09:57.245695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56724 ] 00:06:15.938 [2024-10-13 11:09:57.372875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.938 [2024-10-13 11:09:57.423391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.315 11:09:58 -- accel/accel.sh@18 -- # out=' 00:06:17.315 SPDK Configuration: 00:06:17.315 Core mask: 0x1 00:06:17.315 00:06:17.315 Accel Perf Configuration: 00:06:17.315 Workload Type: dif_generate_copy 00:06:17.315 Vector size: 4096 bytes 00:06:17.315 Transfer size: 4096 bytes 00:06:17.315 Vector count 1 00:06:17.315 Module: software 00:06:17.315 Queue depth: 32 00:06:17.315 Allocate depth: 32 00:06:17.315 # threads/core: 1 00:06:17.315 Run time: 1 seconds 00:06:17.315 Verify: No 00:06:17.315 00:06:17.315 Running for 1 seconds... 00:06:17.315 00:06:17.315 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.315 ------------------------------------------------------------------------------------ 00:06:17.315 0,0 109440/s 434 MiB/s 0 0 00:06:17.315 ==================================================================================== 00:06:17.315 Total 109440/s 427 MiB/s 0 0' 00:06:17.315 11:09:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:17.315 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.315 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.315 11:09:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:17.315 11:09:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.315 11:09:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.315 11:09:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.315 11:09:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.315 11:09:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.315 11:09:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.316 11:09:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.316 11:09:58 -- accel/accel.sh@42 -- # jq -r . 00:06:17.316 [2024-10-13 11:09:58.591982] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:17.316 [2024-10-13 11:09:58.592067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56744 ] 00:06:17.316 [2024-10-13 11:09:58.721425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.316 [2024-10-13 11:09:58.769153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=0x1 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=software 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=32 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=32 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 
-- # val=1 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val=No 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.316 11:09:58 -- accel/accel.sh@21 -- # val= 00:06:17.316 11:09:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.316 11:09:58 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@21 -- # val= 00:06:18.693 11:09:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.693 11:09:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.693 11:09:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.693 11:09:59 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:18.693 11:09:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.693 00:06:18.693 real 0m2.704s 00:06:18.693 user 0m2.375s 00:06:18.693 sys 0m0.132s 00:06:18.693 11:09:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.693 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.693 ************************************ 00:06:18.693 END TEST accel_dif_generate_copy 00:06:18.693 ************************************ 00:06:18.693 11:09:59 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:18.693 11:09:59 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.693 11:09:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:18.693 11:09:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.693 11:09:59 -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.693 ************************************ 00:06:18.693 START TEST accel_comp 00:06:18.693 ************************************ 00:06:18.693 11:09:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.693 11:09:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.693 11:09:59 -- accel/accel.sh@17 -- # local accel_module 00:06:18.693 11:09:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.693 11:09:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.693 11:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.693 11:09:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.693 11:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.693 11:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.693 11:09:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.693 11:09:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.693 11:09:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.693 11:09:59 -- accel/accel.sh@42 -- # jq -r . 00:06:18.693 [2024-10-13 11:09:59.999825] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:18.693 [2024-10-13 11:09:59.999933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56778 ] 00:06:18.693 [2024-10-13 11:10:00.138379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.693 [2024-10-13 11:10:00.185818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.071 11:10:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:20.071 00:06:20.071 SPDK Configuration: 00:06:20.071 Core mask: 0x1 00:06:20.071 00:06:20.071 Accel Perf Configuration: 00:06:20.071 Workload Type: compress 00:06:20.071 Transfer size: 4096 bytes 00:06:20.071 Vector count 1 00:06:20.071 Module: software 00:06:20.071 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.071 Queue depth: 32 00:06:20.071 Allocate depth: 32 00:06:20.071 # threads/core: 1 00:06:20.071 Run time: 1 seconds 00:06:20.071 Verify: No 00:06:20.071 00:06:20.071 Running for 1 seconds... 
00:06:20.071 00:06:20.071 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.071 ------------------------------------------------------------------------------------ 00:06:20.071 0,0 56608/s 235 MiB/s 0 0 00:06:20.071 ==================================================================================== 00:06:20.071 Total 56608/s 221 MiB/s 0 0' 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.071 11:10:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.071 11:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.071 11:10:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.071 11:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.071 11:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.071 11:10:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.071 11:10:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.071 11:10:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.071 11:10:01 -- accel/accel.sh@42 -- # jq -r . 00:06:20.071 [2024-10-13 11:10:01.374698] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:20.071 [2024-10-13 11:10:01.374788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56792 ] 00:06:20.071 [2024-10-13 11:10:01.505577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.071 [2024-10-13 11:10:01.561229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=0x1 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=compress 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 
00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=software 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=32 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=32 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=1 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val=No 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.071 11:10:01 -- accel/accel.sh@21 -- # val= 00:06:20.071 11:10:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.071 11:10:01 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 
00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@21 -- # val= 00:06:21.449 11:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # IFS=: 00:06:21.449 11:10:02 -- accel/accel.sh@20 -- # read -r var val 00:06:21.449 11:10:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.449 11:10:02 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:21.449 11:10:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.449 00:06:21.449 real 0m2.743s 00:06:21.449 user 0m2.404s 00:06:21.449 sys 0m0.136s 00:06:21.449 ************************************ 00:06:21.449 END TEST accel_comp 00:06:21.449 ************************************ 00:06:21.449 11:10:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.449 11:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.449 11:10:02 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.449 11:10:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:21.449 11:10:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.449 11:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.449 ************************************ 00:06:21.449 START TEST accel_decomp 00:06:21.449 ************************************ 00:06:21.449 11:10:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.449 11:10:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.449 11:10:02 -- accel/accel.sh@17 -- # local accel_module 00:06:21.449 11:10:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.449 11:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.449 11:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.449 11:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.449 11:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.449 11:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.449 11:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.449 11:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.449 11:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.449 11:10:02 -- accel/accel.sh@42 -- # jq -r . 00:06:21.449 [2024-10-13 11:10:02.792950] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:21.449 [2024-10-13 11:10:02.793043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56827 ] 00:06:21.449 [2024-10-13 11:10:02.929659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.449 [2024-10-13 11:10:02.977807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.827 11:10:04 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:22.827 00:06:22.827 SPDK Configuration: 00:06:22.827 Core mask: 0x1 00:06:22.827 00:06:22.827 Accel Perf Configuration: 00:06:22.827 Workload Type: decompress 00:06:22.827 Transfer size: 4096 bytes 00:06:22.827 Vector count 1 00:06:22.827 Module: software 00:06:22.827 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.827 Queue depth: 32 00:06:22.827 Allocate depth: 32 00:06:22.827 # threads/core: 1 00:06:22.827 Run time: 1 seconds 00:06:22.827 Verify: Yes 00:06:22.827 00:06:22.827 Running for 1 seconds... 00:06:22.827 00:06:22.827 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.827 ------------------------------------------------------------------------------------ 00:06:22.827 0,0 79072/s 145 MiB/s 0 0 00:06:22.827 ==================================================================================== 00:06:22.827 Total 79072/s 308 MiB/s 0 0' 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.827 11:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.827 11:10:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.827 11:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.827 11:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.827 11:10:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.827 11:10:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.827 11:10:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.827 11:10:04 -- accel/accel.sh@42 -- # jq -r . 00:06:22.827 [2024-10-13 11:10:04.144954] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:22.827 [2024-10-13 11:10:04.145056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56846 ] 00:06:22.827 [2024-10-13 11:10:04.274596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.827 [2024-10-13 11:10:04.322098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=0x1 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=decompress 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=software 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=32 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- 
accel/accel.sh@21 -- # val=32 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=1 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val=Yes 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:22.827 11:10:04 -- accel/accel.sh@21 -- # val= 00:06:22.827 11:10:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:22.827 11:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@21 -- # val= 00:06:24.205 11:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:24.205 11:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:24.205 11:10:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.205 11:10:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:24.205 11:10:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.205 00:06:24.205 real 0m2.719s 00:06:24.205 user 0m2.388s 00:06:24.205 sys 0m0.128s 00:06:24.205 11:10:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.205 11:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:24.205 ************************************ 00:06:24.205 END TEST accel_decomp 00:06:24.205 ************************************ 00:06:24.205 11:10:05 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
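For reference, the software decompress case that just finished can in principle be reproduced by hand with the same accel_perf binary and flags the xtrace lines show. This is only a sketch: the harness also feeds a generated JSON accel config over /dev/fd/62 (omitted here), and running the binary standalone is an assumption rather than something this log demonstrates.

    # Sketch only: binary path, input file and flags copied from the log above;
    # "-c /dev/fd/62" (the harness-generated accel JSON config) is left out.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # -t 1: run for one second, -w decompress: workload, -l: compressed input, -y: verify
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y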
00:06:24.205 11:10:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:24.205 11:10:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.205 11:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:24.205 ************************************ 00:06:24.205 START TEST accel_decmop_full 00:06:24.205 ************************************ 00:06:24.205 11:10:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:24.205 11:10:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.205 11:10:05 -- accel/accel.sh@17 -- # local accel_module 00:06:24.205 11:10:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:24.205 11:10:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:24.205 11:10:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.205 11:10:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.205 11:10:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.205 11:10:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.205 11:10:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.205 11:10:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.205 11:10:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.205 11:10:05 -- accel/accel.sh@42 -- # jq -r . 00:06:24.205 [2024-10-13 11:10:05.560955] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.205 [2024-10-13 11:10:05.561069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56875 ] 00:06:24.205 [2024-10-13 11:10:05.698247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.205 [2024-10-13 11:10:05.746055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.583 11:10:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:25.583 00:06:25.583 SPDK Configuration: 00:06:25.583 Core mask: 0x1 00:06:25.583 00:06:25.583 Accel Perf Configuration: 00:06:25.583 Workload Type: decompress 00:06:25.583 Transfer size: 111250 bytes 00:06:25.583 Vector count 1 00:06:25.583 Module: software 00:06:25.583 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.583 Queue depth: 32 00:06:25.583 Allocate depth: 32 00:06:25.583 # threads/core: 1 00:06:25.583 Run time: 1 seconds 00:06:25.583 Verify: Yes 00:06:25.583 00:06:25.583 Running for 1 seconds... 
00:06:25.583 00:06:25.583 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.583 ------------------------------------------------------------------------------------ 00:06:25.583 0,0 5280/s 218 MiB/s 0 0 00:06:25.583 ==================================================================================== 00:06:25.583 Total 5280/s 560 MiB/s 0 0' 00:06:25.583 11:10:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.583 11:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.583 11:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.583 11:10:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.583 11:10:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.583 11:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.583 11:10:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.583 11:10:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.583 11:10:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.583 11:10:06 -- accel/accel.sh@42 -- # jq -r . 00:06:25.583 [2024-10-13 11:10:06.921859] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:25.583 [2024-10-13 11:10:06.921942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56895 ] 00:06:25.583 [2024-10-13 11:10:07.050785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.583 [2024-10-13 11:10:07.101116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=0x1 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=decompress 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:25.583 11:10:07 -- accel/accel.sh@20 
-- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=software 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=32 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=32 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=1 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val=Yes 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:25.583 11:10:07 -- accel/accel.sh@21 -- # val= 00:06:25.583 11:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:25.583 11:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # 
val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@21 -- # val= 00:06:26.977 11:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:26.977 11:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:26.977 11:10:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.977 11:10:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.977 11:10:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.977 00:06:26.977 real 0m2.729s 00:06:26.977 user 0m2.398s 00:06:26.977 sys 0m0.125s 00:06:26.977 11:10:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.977 11:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.977 ************************************ 00:06:26.977 END TEST accel_decmop_full 00:06:26.977 ************************************ 00:06:26.977 11:10:08 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.977 11:10:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:26.977 11:10:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.977 11:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.977 ************************************ 00:06:26.977 START TEST accel_decomp_mcore 00:06:26.977 ************************************ 00:06:26.977 11:10:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.977 11:10:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.977 11:10:08 -- accel/accel.sh@17 -- # local accel_module 00:06:26.977 11:10:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.977 11:10:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.977 11:10:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.977 11:10:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.977 11:10:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.977 11:10:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.977 11:10:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.977 11:10:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.977 11:10:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.977 11:10:08 -- accel/accel.sh@42 -- # jq -r . 00:06:26.977 [2024-10-13 11:10:08.341654] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:26.977 [2024-10-13 11:10:08.341750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56929 ] 00:06:26.977 [2024-10-13 11:10:08.471189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.977 [2024-10-13 11:10:08.526812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.977 [2024-10-13 11:10:08.526969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.977 [2024-10-13 11:10:08.527266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.977 [2024-10-13 11:10:08.527103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.394 11:10:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:28.394 00:06:28.394 SPDK Configuration: 00:06:28.394 Core mask: 0xf 00:06:28.394 00:06:28.394 Accel Perf Configuration: 00:06:28.394 Workload Type: decompress 00:06:28.394 Transfer size: 4096 bytes 00:06:28.394 Vector count 1 00:06:28.394 Module: software 00:06:28.394 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.394 Queue depth: 32 00:06:28.394 Allocate depth: 32 00:06:28.394 # threads/core: 1 00:06:28.394 Run time: 1 seconds 00:06:28.394 Verify: Yes 00:06:28.394 00:06:28.394 Running for 1 seconds... 00:06:28.394 00:06:28.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.394 ------------------------------------------------------------------------------------ 00:06:28.394 0,0 64576/s 118 MiB/s 0 0 00:06:28.394 3,0 61728/s 113 MiB/s 0 0 00:06:28.395 2,0 61888/s 114 MiB/s 0 0 00:06:28.395 1,0 61824/s 113 MiB/s 0 0 00:06:28.395 ==================================================================================== 00:06:28.395 Total 250016/s 976 MiB/s 0 0' 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.395 11:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.395 11:10:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.395 11:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.395 11:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.395 11:10:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.395 11:10:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.395 11:10:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.395 11:10:09 -- accel/accel.sh@42 -- # jq -r . 00:06:28.395 [2024-10-13 11:10:09.722246] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:28.395 [2024-10-13 11:10:09.722394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56952 ] 00:06:28.395 [2024-10-13 11:10:09.853980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.395 [2024-10-13 11:10:09.906029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.395 [2024-10-13 11:10:09.906162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.395 [2024-10-13 11:10:09.906291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.395 [2024-10-13 11:10:09.906294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=0xf 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=decompress 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=software 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 
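Aside on the core mask: the accel_decomp_mcore runs above pass -m 0xf, and the EAL output confirms four reactors on cores 0 through 3. A small illustration (not harness code) of how such a hex mask expands to core IDs:

    # Illustration only: expand a hex core mask such as the 0xf used above.
    mask=0xf
    for ((core = 0; core < 64; core++)); do
        if (( (mask >> core) & 1 )); then
            printf 'core %d\n' "$core"
        fi
    done
    # Prints core 0 through core 3, matching the four reactors started here.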
00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=32 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=32 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=1 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val=Yes 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:28.395 11:10:09 -- accel/accel.sh@21 -- # val= 00:06:28.395 11:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:28.395 11:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@21 -- # val= 00:06:29.773 11:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:29.773 11:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:29.773 11:10:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.773 11:10:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.773 11:10:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.773 00:06:29.773 real 0m2.754s 00:06:29.773 user 0m8.812s 00:06:29.773 sys 0m0.154s 00:06:29.773 11:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.773 11:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.773 ************************************ 00:06:29.773 END TEST accel_decomp_mcore 00:06:29.773 ************************************ 00:06:29.773 11:10:11 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.773 11:10:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:29.773 11:10:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.773 11:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.773 ************************************ 00:06:29.773 START TEST accel_decomp_full_mcore 00:06:29.773 ************************************ 00:06:29.773 11:10:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.773 11:10:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.773 11:10:11 -- accel/accel.sh@17 -- # local accel_module 00:06:29.774 11:10:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.774 11:10:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.774 11:10:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.774 11:10:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.774 11:10:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.774 11:10:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.774 11:10:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.774 11:10:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.774 11:10:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.774 11:10:11 -- accel/accel.sh@42 -- # jq -r . 00:06:29.774 [2024-10-13 11:10:11.150950] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:29.774 [2024-10-13 11:10:11.151208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56984 ] 00:06:29.774 [2024-10-13 11:10:11.287957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.774 [2024-10-13 11:10:11.337710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.774 [2024-10-13 11:10:11.337853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.774 [2024-10-13 11:10:11.337980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.774 [2024-10-13 11:10:11.338188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.151 11:10:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:31.151 00:06:31.151 SPDK Configuration: 00:06:31.151 Core mask: 0xf 00:06:31.151 00:06:31.151 Accel Perf Configuration: 00:06:31.151 Workload Type: decompress 00:06:31.151 Transfer size: 111250 bytes 00:06:31.151 Vector count 1 00:06:31.151 Module: software 00:06:31.151 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.151 Queue depth: 32 00:06:31.151 Allocate depth: 32 00:06:31.151 # threads/core: 1 00:06:31.151 Run time: 1 seconds 00:06:31.151 Verify: Yes 00:06:31.151 00:06:31.151 Running for 1 seconds... 00:06:31.151 00:06:31.151 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.151 ------------------------------------------------------------------------------------ 00:06:31.151 0,0 4832/s 199 MiB/s 0 0 00:06:31.151 3,0 4832/s 199 MiB/s 0 0 00:06:31.151 2,0 4800/s 198 MiB/s 0 0 00:06:31.151 1,0 4832/s 199 MiB/s 0 0 00:06:31.151 ==================================================================================== 00:06:31.151 Total 19296/s 2047 MiB/s 0 0' 00:06:31.151 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.151 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.151 11:10:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.151 11:10:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.151 11:10:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.151 11:10:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.151 11:10:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.151 11:10:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.151 11:10:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.151 11:10:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.151 11:10:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.151 11:10:12 -- accel/accel.sh@42 -- # jq -r . 00:06:31.151 [2024-10-13 11:10:12.539252] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:31.151 [2024-10-13 11:10:12.539390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57008 ] 00:06:31.151 [2024-10-13 11:10:12.675467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.151 [2024-10-13 11:10:12.728493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.151 [2024-10-13 11:10:12.728615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.151 [2024-10-13 11:10:12.728739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.151 [2024-10-13 11:10:12.728755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=0xf 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=decompress 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=software 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 
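The accel_decomp_full_mcore case combines -o 0 with -m 0xf, as the run_test command line above shows. If one wanted to eyeball how decompress throughput scales with the mask, a hand-run sweep along these lines would be one way to do it; this is a sketch under the assumption that accel_perf is run manually with the same flags, not part of the test suite:

    # Sketch only: reuse the flags from the full_mcore run above across several masks.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    for mask in 0x1 0x3 0xf; do
        echo "=== core mask $mask ==="
        "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
            -l "$SPDK_REPO/test/accel/bib" -y -o 0 -m "$mask"
    done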
00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=32 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=32 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.410 11:10:12 -- accel/accel.sh@21 -- # val=1 00:06:31.410 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.410 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.411 11:10:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.411 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.411 11:10:12 -- accel/accel.sh@21 -- # val=Yes 00:06:31.411 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.411 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.411 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:31.411 11:10:12 -- accel/accel.sh@21 -- # val= 00:06:31.411 11:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # IFS=: 00:06:31.411 11:10:12 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.347 ************************************ 00:06:32.347 END TEST accel_decomp_full_mcore 00:06:32.347 ************************************ 00:06:32.347 11:10:13 -- accel/accel.sh@21 -- # val= 00:06:32.347 11:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # IFS=: 00:06:32.347 11:10:13 -- accel/accel.sh@20 -- # read -r var val 00:06:32.347 11:10:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.347 11:10:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:32.347 11:10:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.347 00:06:32.347 real 0m2.790s 00:06:32.347 user 0m8.906s 00:06:32.347 sys 0m0.175s 00:06:32.347 11:10:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.347 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:06:32.606 11:10:13 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.606 11:10:13 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:32.606 11:10:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.606 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:06:32.606 ************************************ 00:06:32.606 START TEST accel_decomp_mthread 00:06:32.606 ************************************ 00:06:32.606 11:10:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.606 11:10:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.606 11:10:13 -- accel/accel.sh@17 -- # local accel_module 00:06:32.606 11:10:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.606 11:10:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.606 11:10:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.606 11:10:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.606 11:10:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.606 11:10:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.606 11:10:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.606 11:10:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.606 11:10:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.606 11:10:13 -- accel/accel.sh@42 -- # jq -r . 00:06:32.606 [2024-10-13 11:10:13.989547] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:32.606 [2024-10-13 11:10:13.989651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57046 ] 00:06:32.606 [2024-10-13 11:10:14.126419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.606 [2024-10-13 11:10:14.173400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.992 11:10:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:33.992 00:06:33.992 SPDK Configuration: 00:06:33.992 Core mask: 0x1 00:06:33.992 00:06:33.992 Accel Perf Configuration: 00:06:33.992 Workload Type: decompress 00:06:33.992 Transfer size: 4096 bytes 00:06:33.992 Vector count 1 00:06:33.992 Module: software 00:06:33.992 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.992 Queue depth: 32 00:06:33.992 Allocate depth: 32 00:06:33.992 # threads/core: 2 00:06:33.992 Run time: 1 seconds 00:06:33.992 Verify: Yes 00:06:33.992 00:06:33.992 Running for 1 seconds... 00:06:33.992 00:06:33.992 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.992 ------------------------------------------------------------------------------------ 00:06:33.992 0,1 40320/s 74 MiB/s 0 0 00:06:33.992 0,0 40192/s 74 MiB/s 0 0 00:06:33.992 ==================================================================================== 00:06:33.992 Total 80512/s 314 MiB/s 0 0' 00:06:33.992 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.993 11:10:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.993 11:10:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.993 11:10:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.993 11:10:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.993 11:10:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.993 11:10:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.993 11:10:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.993 11:10:15 -- accel/accel.sh@42 -- # jq -r . 00:06:33.993 [2024-10-13 11:10:15.350535] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
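The mthread variants pass -T 2, and the "# threads/core: 2" line plus the 0,0/0,1 result rows above reflect two worker threads on core 0. A hedged sketch of comparing one versus two threads per core by hand (the -T 1 baseline is an assumption; only -T 2 appears in this log):

    # Sketch only: same decompress flags as above, varying threads per core.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    for threads in 1 2; do
        echo "=== threads/core: $threads ==="
        "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
            -l "$SPDK_REPO/test/accel/bib" -y -T "$threads"
    done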
00:06:33.993 [2024-10-13 11:10:15.350624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:06:33.993 [2024-10-13 11:10:15.487487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.993 [2024-10-13 11:10:15.534758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=0x1 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=decompress 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=software 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=32 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- 
accel/accel.sh@21 -- # val=32 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=2 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val=Yes 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:33.993 11:10:15 -- accel/accel.sh@21 -- # val= 00:06:33.993 11:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # IFS=: 00:06:33.993 11:10:15 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@21 -- # val= 00:06:35.371 11:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # IFS=: 00:06:35.371 11:10:16 -- accel/accel.sh@20 -- # read -r var val 00:06:35.371 11:10:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.371 11:10:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:35.371 11:10:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.371 00:06:35.371 real 0m2.736s 00:06:35.371 user 0m2.411s 00:06:35.371 sys 0m0.125s 00:06:35.371 ************************************ 00:06:35.371 END TEST accel_decomp_mthread 00:06:35.371 ************************************ 00:06:35.371 11:10:16 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:35.371 11:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.371 11:10:16 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.371 11:10:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:35.371 11:10:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.371 11:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.371 ************************************ 00:06:35.371 START TEST accel_deomp_full_mthread 00:06:35.371 ************************************ 00:06:35.371 11:10:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.371 11:10:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.371 11:10:16 -- accel/accel.sh@17 -- # local accel_module 00:06:35.371 11:10:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.371 11:10:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.371 11:10:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.371 11:10:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.371 11:10:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.371 11:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.371 11:10:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.371 11:10:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.371 11:10:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.371 11:10:16 -- accel/accel.sh@42 -- # jq -r . 00:06:35.371 [2024-10-13 11:10:16.775103] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.371 [2024-10-13 11:10:16.775190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57094 ] 00:06:35.371 [2024-10-13 11:10:16.910725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.371 [2024-10-13 11:10:16.958989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.773 11:10:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:36.773 00:06:36.773 SPDK Configuration: 00:06:36.773 Core mask: 0x1 00:06:36.773 00:06:36.773 Accel Perf Configuration: 00:06:36.773 Workload Type: decompress 00:06:36.773 Transfer size: 111250 bytes 00:06:36.773 Vector count 1 00:06:36.773 Module: software 00:06:36.773 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.773 Queue depth: 32 00:06:36.773 Allocate depth: 32 00:06:36.773 # threads/core: 2 00:06:36.773 Run time: 1 seconds 00:06:36.773 Verify: Yes 00:06:36.773 00:06:36.773 Running for 1 seconds... 
00:06:36.773 00:06:36.773 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.773 ------------------------------------------------------------------------------------ 00:06:36.773 0,1 2720/s 112 MiB/s 0 0 00:06:36.773 0,0 2720/s 112 MiB/s 0 0 00:06:36.773 ==================================================================================== 00:06:36.773 Total 5440/s 577 MiB/s 0 0' 00:06:36.773 11:10:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.773 11:10:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.773 11:10:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.773 11:10:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.773 11:10:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.773 11:10:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.773 11:10:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.773 11:10:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.773 11:10:18 -- accel/accel.sh@42 -- # jq -r . 00:06:36.773 [2024-10-13 11:10:18.155107] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.773 [2024-10-13 11:10:18.155190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57114 ] 00:06:36.773 [2024-10-13 11:10:18.283007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.773 [2024-10-13 11:10:18.330048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val=0x1 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val=decompress 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:36.773 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:36.773 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:36.773 11:10:18 -- accel/accel.sh@21 -- # val=software 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val=32 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val=32 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val=2 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val=Yes 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.032 11:10:18 -- accel/accel.sh@21 -- # val= 00:06:37.032 11:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.032 11:10:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.967 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # 
read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@21 -- # val= 00:06:37.968 11:10:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # IFS=: 00:06:37.968 11:10:19 -- accel/accel.sh@20 -- # read -r var val 00:06:37.968 11:10:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.968 11:10:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.968 11:10:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.968 00:06:37.968 real 0m2.756s 00:06:37.968 user 0m2.430s 00:06:37.968 sys 0m0.126s 00:06:37.968 11:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.968 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.968 ************************************ 00:06:37.968 END TEST accel_deomp_full_mthread 00:06:37.968 ************************************ 00:06:37.968 11:10:19 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:37.968 11:10:19 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.968 11:10:19 -- accel/accel.sh@129 -- # build_accel_config 00:06:37.968 11:10:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.968 11:10:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.968 11:10:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.968 11:10:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.968 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.968 11:10:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.968 11:10:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.968 11:10:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.968 11:10:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.968 11:10:19 -- accel/accel.sh@42 -- # jq -r . 00:06:37.968 ************************************ 00:06:37.968 START TEST accel_dif_functional_tests 00:06:37.968 ************************************ 00:06:37.968 11:10:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:38.227 [2024-10-13 11:10:19.613989] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
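The accel_dif_functional_tests run started above launches a dedicated CUnit binary instead of accel_perf; accel.sh hands it the generated accel JSON config on /dev/fd/62. A rough stand-alone sketch of the same launch, with the generated config replaced here by an assumed empty JSON object (the real contents come from build_accel_config):

  # assumed: the dif app accepts an empty JSON config in place of the generated one
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')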
00:06:38.227 [2024-10-13 11:10:19.614266] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57144 ] 00:06:38.227 [2024-10-13 11:10:19.751985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.227 [2024-10-13 11:10:19.801360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.227 [2024-10-13 11:10:19.801472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.227 [2024-10-13 11:10:19.801476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.486 00:06:38.486 00:06:38.486 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.486 http://cunit.sourceforge.net/ 00:06:38.486 00:06:38.486 00:06:38.486 Suite: accel_dif 00:06:38.486 Test: verify: DIF generated, GUARD check ...passed 00:06:38.486 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.486 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.486 Test: verify: DIF not generated, GUARD check ...passed 00:06:38.486 Test: verify: DIF not generated, APPTAG check ...[2024-10-13 11:10:19.852024] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.486 [2024-10-13 11:10:19.852094] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.486 [2024-10-13 11:10:19.852133] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.486 passed 00:06:38.486 Test: verify: DIF not generated, REFTAG check ...[2024-10-13 11:10:19.852234] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.486 [2024-10-13 11:10:19.852275] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.486 [2024-10-13 11:10:19.852301] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.486 passed 00:06:38.486 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.487 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-13 11:10:19.852556] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.487 passed 00:06:38.487 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.487 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.487 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.487 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-10-13 11:10:19.852843] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.487 passed 00:06:38.487 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.487 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:38.487 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.487 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.487 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.487 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.487 Test: generate copy: iovecs-len validate ...passed 00:06:38.487 Test: generate copy: buffer alignment validate ...passed 00:06:38.487 00:06:38.487 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.487 suites 1 1 n/a 0 0 00:06:38.487 tests 20 20 20 0 0 00:06:38.487 
asserts 204 204 204 0 n/a 00:06:38.487 00:06:38.487 Elapsed time = 0.005 seconds 00:06:38.487 [2024-10-13 11:10:19.853402] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:38.487 ************************************ 00:06:38.487 END TEST accel_dif_functional_tests 00:06:38.487 ************************************ 00:06:38.487 00:06:38.487 real 0m0.449s 00:06:38.487 user 0m0.516s 00:06:38.487 sys 0m0.099s 00:06:38.487 11:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.487 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.487 00:06:38.487 real 0m58.677s 00:06:38.487 user 1m4.105s 00:06:38.487 sys 0m4.045s 00:06:38.487 ************************************ 00:06:38.487 END TEST accel 00:06:38.487 ************************************ 00:06:38.487 11:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.487 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.746 11:10:20 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.746 11:10:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.746 11:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.746 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.746 ************************************ 00:06:38.746 START TEST accel_rpc 00:06:38.746 ************************************ 00:06:38.746 11:10:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.746 * Looking for test storage... 00:06:38.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:38.746 11:10:20 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.746 11:10:20 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57213 00:06:38.746 11:10:20 -- accel/accel_rpc.sh@15 -- # waitforlisten 57213 00:06:38.746 11:10:20 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.746 11:10:20 -- common/autotest_common.sh@819 -- # '[' -z 57213 ']' 00:06:38.746 11:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.746 11:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.746 11:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.746 11:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.746 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.746 [2024-10-13 11:10:20.230272] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
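The accel_rpc suite starting here brings spdk_tgt up with --wait-for-rpc and then drives it through scripts/rpc.py: the copy opcode is assigned before framework_start_init and the assignment is read back afterwards. A minimal sketch of that sequence against the default /var/tmp/spdk.sock, using the method names that appear in the xtrace below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software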
00:06:38.746 [2024-10-13 11:10:20.230632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57213 ] 00:06:39.005 [2024-10-13 11:10:20.362261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.005 [2024-10-13 11:10:20.413589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.005 [2024-10-13 11:10:20.413764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.005 11:10:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.005 11:10:20 -- common/autotest_common.sh@852 -- # return 0 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:39.005 11:10:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.005 11:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.005 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.005 ************************************ 00:06:39.005 START TEST accel_assign_opcode 00:06:39.005 ************************************ 00:06:39.005 11:10:20 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:39.005 11:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.005 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.005 [2024-10-13 11:10:20.490090] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:39.005 11:10:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:39.005 11:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.005 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.005 [2024-10-13 11:10:20.498087] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:39.005 11:10:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.005 11:10:20 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:39.005 11:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.005 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.264 11:10:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.265 11:10:20 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.265 11:10:20 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:39.265 11:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.265 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 11:10:20 -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.265 11:10:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.265 software 00:06:39.265 ************************************ 00:06:39.265 END TEST accel_assign_opcode 00:06:39.265 ************************************ 00:06:39.265 00:06:39.265 real 0m0.189s 00:06:39.265 user 0m0.057s 00:06:39.265 sys 0m0.010s 00:06:39.265 11:10:20 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.265 11:10:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 11:10:20 -- accel/accel_rpc.sh@55 -- # killprocess 57213 00:06:39.265 11:10:20 -- common/autotest_common.sh@926 -- # '[' -z 57213 ']' 00:06:39.265 11:10:20 -- common/autotest_common.sh@930 -- # kill -0 57213 00:06:39.265 11:10:20 -- common/autotest_common.sh@931 -- # uname 00:06:39.265 11:10:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.265 11:10:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57213 00:06:39.265 killing process with pid 57213 00:06:39.265 11:10:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.265 11:10:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.265 11:10:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57213' 00:06:39.265 11:10:20 -- common/autotest_common.sh@945 -- # kill 57213 00:06:39.265 11:10:20 -- common/autotest_common.sh@950 -- # wait 57213 00:06:39.524 ************************************ 00:06:39.524 END TEST accel_rpc 00:06:39.524 ************************************ 00:06:39.524 00:06:39.524 real 0m0.910s 00:06:39.524 user 0m0.947s 00:06:39.524 sys 0m0.289s 00:06:39.524 11:10:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.524 11:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.524 11:10:21 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.524 11:10:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.524 11:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.524 11:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.524 ************************************ 00:06:39.524 START TEST app_cmdline 00:06:39.524 ************************************ 00:06:39.524 11:10:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.783 * Looking for test storage... 00:06:39.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.783 11:10:21 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.783 11:10:21 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57292 00:06:39.783 11:10:21 -- app/cmdline.sh@18 -- # waitforlisten 57292 00:06:39.783 11:10:21 -- common/autotest_common.sh@819 -- # '[' -z 57292 ']' 00:06:39.783 11:10:21 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.783 11:10:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.783 11:10:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.783 11:10:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.783 11:10:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.783 11:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.783 [2024-10-13 11:10:21.200302] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
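The cmdline test above starts the target with an RPC allow-list of spdk_get_version and rpc_get_methods and exercises both the permitted and the rejected paths. A minimal sketch of the same queries via scripts/rpc.py, assuming the default RPC socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # not on the allow-list, so this is expected to fail with -32601 Method not found
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats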
00:06:39.783 [2024-10-13 11:10:21.200419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57292 ] 00:06:39.783 [2024-10-13 11:10:21.339221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.078 [2024-10-13 11:10:21.391551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.078 [2024-10-13 11:10:21.391719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.648 11:10:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.648 11:10:22 -- common/autotest_common.sh@852 -- # return 0 00:06:40.648 11:10:22 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:40.907 { 00:06:40.907 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:06:40.907 "fields": { 00:06:40.907 "major": 24, 00:06:40.907 "minor": 1, 00:06:40.907 "patch": 1, 00:06:40.907 "suffix": "-pre", 00:06:40.907 "commit": "726a04d70" 00:06:40.907 } 00:06:40.907 } 00:06:40.907 11:10:22 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.907 11:10:22 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.907 11:10:22 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.907 11:10:22 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.907 11:10:22 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.907 11:10:22 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.907 11:10:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.907 11:10:22 -- app/cmdline.sh@26 -- # sort 00:06:40.907 11:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:40.907 11:10:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.167 11:10:22 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.167 11:10:22 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.167 11:10:22 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.167 11:10:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.167 11:10:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.167 11:10:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.167 11:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.167 11:10:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.167 11:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.167 11:10:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.167 11:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.167 11:10:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.167 11:10:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:41.167 11:10:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.167 request: 00:06:41.167 { 00:06:41.167 "method": "env_dpdk_get_mem_stats", 00:06:41.167 "req_id": 1 00:06:41.167 } 00:06:41.167 Got 
JSON-RPC error response 00:06:41.167 response: 00:06:41.167 { 00:06:41.167 "code": -32601, 00:06:41.167 "message": "Method not found" 00:06:41.167 } 00:06:41.167 11:10:22 -- common/autotest_common.sh@643 -- # es=1 00:06:41.167 11:10:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.167 11:10:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.167 11:10:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.167 11:10:22 -- app/cmdline.sh@1 -- # killprocess 57292 00:06:41.167 11:10:22 -- common/autotest_common.sh@926 -- # '[' -z 57292 ']' 00:06:41.167 11:10:22 -- common/autotest_common.sh@930 -- # kill -0 57292 00:06:41.167 11:10:22 -- common/autotest_common.sh@931 -- # uname 00:06:41.167 11:10:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.167 11:10:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57292 00:06:41.426 killing process with pid 57292 00:06:41.426 11:10:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.426 11:10:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.426 11:10:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57292' 00:06:41.426 11:10:22 -- common/autotest_common.sh@945 -- # kill 57292 00:06:41.426 11:10:22 -- common/autotest_common.sh@950 -- # wait 57292 00:06:41.685 00:06:41.686 real 0m1.987s 00:06:41.686 user 0m2.665s 00:06:41.686 sys 0m0.336s 00:06:41.686 ************************************ 00:06:41.686 END TEST app_cmdline 00:06:41.686 ************************************ 00:06:41.686 11:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.686 11:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.686 11:10:23 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.686 11:10:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.686 11:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.686 11:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.686 ************************************ 00:06:41.686 START TEST version 00:06:41.686 ************************************ 00:06:41.686 11:10:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.686 * Looking for test storage... 
00:06:41.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.686 11:10:23 -- app/version.sh@17 -- # get_header_version major 00:06:41.686 11:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.686 11:10:23 -- app/version.sh@14 -- # cut -f2 00:06:41.686 11:10:23 -- app/version.sh@14 -- # tr -d '"' 00:06:41.686 11:10:23 -- app/version.sh@17 -- # major=24 00:06:41.686 11:10:23 -- app/version.sh@18 -- # get_header_version minor 00:06:41.686 11:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.686 11:10:23 -- app/version.sh@14 -- # cut -f2 00:06:41.686 11:10:23 -- app/version.sh@14 -- # tr -d '"' 00:06:41.686 11:10:23 -- app/version.sh@18 -- # minor=1 00:06:41.686 11:10:23 -- app/version.sh@19 -- # get_header_version patch 00:06:41.686 11:10:23 -- app/version.sh@14 -- # cut -f2 00:06:41.686 11:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.686 11:10:23 -- app/version.sh@14 -- # tr -d '"' 00:06:41.686 11:10:23 -- app/version.sh@19 -- # patch=1 00:06:41.686 11:10:23 -- app/version.sh@20 -- # get_header_version suffix 00:06:41.686 11:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.686 11:10:23 -- app/version.sh@14 -- # cut -f2 00:06:41.686 11:10:23 -- app/version.sh@14 -- # tr -d '"' 00:06:41.686 11:10:23 -- app/version.sh@20 -- # suffix=-pre 00:06:41.686 11:10:23 -- app/version.sh@22 -- # version=24.1 00:06:41.686 11:10:23 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.686 11:10:23 -- app/version.sh@25 -- # version=24.1.1 00:06:41.686 11:10:23 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:41.686 11:10:23 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.686 11:10:23 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.686 11:10:23 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:41.686 11:10:23 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:41.686 00:06:41.686 real 0m0.144s 00:06:41.686 user 0m0.086s 00:06:41.686 sys 0m0.090s 00:06:41.686 ************************************ 00:06:41.686 END TEST version 00:06:41.686 ************************************ 00:06:41.686 11:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.686 11:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.686 11:10:23 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:41.686 11:10:23 -- spdk/autotest.sh@204 -- # uname -s 00:06:41.945 11:10:23 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:41.945 11:10:23 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:41.945 11:10:23 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:06:41.946 11:10:23 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:06:41.946 11:10:23 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.946 11:10:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.946 11:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.946 11:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.946 
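Each version field above is extracted from include/spdk/version.h with a grep/cut/tr pipeline and the result is compared against the Python package version. A minimal sketch of one such extraction plus the Python check, with paths as in this build tree:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
      /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
      python3 -c 'import spdk; print(spdk.__version__)'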
************************************ 00:06:41.946 START TEST spdk_dd 00:06:41.946 ************************************ 00:06:41.946 11:10:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.946 * Looking for test storage... 00:06:41.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.946 11:10:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.946 11:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.946 11:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.946 11:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.946 11:10:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.946 11:10:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.946 11:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.946 11:10:23 -- paths/export.sh@5 -- # export PATH 00:06:41.946 11:10:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.946 11:10:23 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:42.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.205 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:42.205 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:42.205 11:10:23 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:42.205 11:10:23 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:42.205 11:10:23 -- scripts/common.sh@311 -- # local bdf bdfs 00:06:42.205 11:10:23 -- scripts/common.sh@312 -- # local nvmes 00:06:42.205 11:10:23 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:06:42.205 11:10:23 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:42.205 11:10:23 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:06:42.205 11:10:23 -- scripts/common.sh@297 -- # local bdf= 00:06:42.205 11:10:23 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:06:42.205 11:10:23 -- scripts/common.sh@232 -- # local class 00:06:42.205 11:10:23 -- scripts/common.sh@233 -- # local subclass 00:06:42.205 11:10:23 -- scripts/common.sh@234 -- # local progif 00:06:42.205 11:10:23 -- scripts/common.sh@235 -- # printf %02x 1 00:06:42.205 11:10:23 -- scripts/common.sh@235 -- # class=01 00:06:42.205 11:10:23 -- scripts/common.sh@236 -- # printf %02x 8 00:06:42.205 11:10:23 -- scripts/common.sh@236 -- # subclass=08 00:06:42.205 11:10:23 -- scripts/common.sh@237 -- # printf %02x 2 00:06:42.205 11:10:23 -- scripts/common.sh@237 -- # progif=02 00:06:42.205 11:10:23 -- scripts/common.sh@239 -- # hash lspci 00:06:42.205 11:10:23 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:06:42.205 11:10:23 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:06:42.205 11:10:23 -- scripts/common.sh@242 -- # grep -i -- -p02 00:06:42.205 11:10:23 -- scripts/common.sh@244 -- # tr -d '"' 00:06:42.205 11:10:23 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:42.205 11:10:23 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:42.205 11:10:23 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:06:42.205 11:10:23 -- scripts/common.sh@15 -- # local i 00:06:42.205 11:10:23 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:06:42.205 11:10:23 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:42.205 11:10:23 -- scripts/common.sh@24 -- # return 0 00:06:42.205 11:10:23 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:06:42.205 11:10:23 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:42.205 11:10:23 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:06:42.205 11:10:23 -- scripts/common.sh@15 -- # local i 00:06:42.205 11:10:23 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:06:42.205 11:10:23 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:42.205 11:10:23 -- scripts/common.sh@24 -- # return 0 00:06:42.205 11:10:23 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:06:42.205 11:10:23 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:42.205 11:10:23 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:06:42.205 11:10:23 -- scripts/common.sh@322 -- # uname -s 00:06:42.205 11:10:23 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:42.205 11:10:23 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:42.205 11:10:23 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:42.205 11:10:23 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:06:42.205 11:10:23 -- scripts/common.sh@322 -- # uname -s 00:06:42.465 11:10:23 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:42.465 11:10:23 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:42.465 11:10:23 -- scripts/common.sh@327 -- # (( 2 )) 00:06:42.465 11:10:23 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:42.465 11:10:23 -- dd/dd.sh@13 -- # check_liburing 00:06:42.465 11:10:23 -- dd/common.sh@139 -- # local lib so 00:06:42.465 11:10:23 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:42.465 11:10:23 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 
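The check_liburing pass that unrolls over the following lines asks the dynamic loader for spdk_dd's dependency list and compares every entry against liburing.so.*; its verdict ("spdk_dd linked to liburing") is printed further down. A rough stand-alone equivalent of the same probe, assuming the same binary path:

  LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep -c 'liburing\.so'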
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:06:42.465 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.465 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 
-- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:42.466 11:10:23 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:42.466 11:10:23 -- dd/common.sh@144 -- # printf '* 
spdk_dd linked to liburing\n' 00:06:42.466 * spdk_dd linked to liburing 00:06:42.466 11:10:23 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:42.466 11:10:23 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:42.466 11:10:23 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:42.466 11:10:23 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:42.466 11:10:23 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:42.466 11:10:23 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:42.466 11:10:23 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:42.466 11:10:23 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:42.466 11:10:23 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:42.466 11:10:23 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:42.466 11:10:23 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:42.466 11:10:23 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:42.466 11:10:23 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:42.466 11:10:23 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:42.466 11:10:23 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:42.466 11:10:23 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:42.466 11:10:23 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:42.466 11:10:23 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:42.466 11:10:23 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:42.466 11:10:23 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:42.466 11:10:23 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:42.466 11:10:23 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:42.466 11:10:23 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:42.466 11:10:23 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:42.466 11:10:23 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:42.466 11:10:23 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:42.466 11:10:23 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:42.466 11:10:23 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:42.466 11:10:23 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:42.466 11:10:23 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:42.466 11:10:23 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:42.466 11:10:23 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:42.466 11:10:23 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:42.466 11:10:23 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:42.466 11:10:23 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:42.466 11:10:23 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:42.466 11:10:23 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:42.466 11:10:23 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:42.466 11:10:23 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:42.466 11:10:23 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@43 -- 
# CONFIG_UNIT_TESTS=n 00:06:42.466 11:10:23 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:42.466 11:10:23 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:42.466 11:10:23 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:42.466 11:10:23 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:42.466 11:10:23 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:42.466 11:10:23 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:42.466 11:10:23 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:42.466 11:10:23 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:42.466 11:10:23 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:42.466 11:10:23 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:06:42.466 11:10:23 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:42.466 11:10:23 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:42.466 11:10:23 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:42.466 11:10:23 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:42.466 11:10:23 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:42.466 11:10:23 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:42.466 11:10:23 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:42.466 11:10:23 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:42.467 11:10:23 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:42.467 11:10:23 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:42.467 11:10:23 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:42.467 11:10:23 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:42.467 11:10:23 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:42.467 11:10:23 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:42.467 11:10:23 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:42.467 11:10:23 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:42.467 11:10:23 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:42.467 11:10:23 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:42.467 11:10:23 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:42.467 11:10:23 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:42.467 11:10:23 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:42.467 11:10:23 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:42.467 11:10:23 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:42.467 11:10:23 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:42.467 11:10:23 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:42.467 11:10:23 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:06:42.467 11:10:23 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:42.467 11:10:23 -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:06:42.467 11:10:23 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:42.467 11:10:23 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:42.467 11:10:23 -- dd/common.sh@157 -- # return 0 00:06:42.467 11:10:23 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:42.467 11:10:23 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:42.467 11:10:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:42.467 11:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.467 11:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:42.467 ************************************ 00:06:42.467 START TEST spdk_dd_basic_rw 00:06:42.467 ************************************ 00:06:42.467 11:10:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:42.467 * Looking for test storage... 00:06:42.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:42.467 11:10:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.467 11:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.467 11:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.467 11:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.467 11:10:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.467 11:10:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.467 11:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.467 11:10:23 -- paths/export.sh@5 -- # export PATH 00:06:42.467 11:10:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.467 11:10:23 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:42.467 11:10:23 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:42.467 11:10:23 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:42.467 11:10:23 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:06:42.467 11:10:23 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:42.467 11:10:23 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:06:42.467 11:10:23 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:42.467 11:10:23 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.467 11:10:23 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.467 11:10:23 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:06:42.467 11:10:23 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:06:42.467 11:10:23 -- dd/common.sh@126 -- # mapfile -t id 00:06:42.467 11:10:23 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:06:42.728 11:10:24 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported 
Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): 
Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2197 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:42.728 11:10:24 -- dd/common.sh@130 -- # lbaf=04 00:06:42.729 11:10:24 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe 
Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported 
Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2197 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA 
Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:42.729 11:10:24 -- dd/common.sh@132 -- # lbaf=4096 00:06:42.729 11:10:24 -- dd/common.sh@134 -- # echo 4096 00:06:42.729 11:10:24 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:42.729 11:10:24 -- dd/basic_rw.sh@96 -- # : 00:06:42.729 11:10:24 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.729 11:10:24 -- dd/basic_rw.sh@96 -- # gen_conf 00:06:42.729 11:10:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.729 11:10:24 -- dd/common.sh@31 -- # xtrace_disable 00:06:42.729 11:10:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.729 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 START TEST dd_bs_lt_native_bs 00:06:42.729 ************************************ 00:06:42.729 11:10:24 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.729 11:10:24 -- common/autotest_common.sh@640 -- # local es=0 00:06:42.729 11:10:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.729 11:10:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.729 11:10:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.729 11:10:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.729 11:10:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.729 11:10:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.729 11:10:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.729 11:10:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.729 11:10:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.729 11:10:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.729 { 00:06:42.729 "subsystems": [ 00:06:42.729 { 00:06:42.729 "subsystem": "bdev", 00:06:42.729 "config": [ 00:06:42.729 { 00:06:42.729 "params": { 00:06:42.729 "trtype": "pcie", 00:06:42.729 "traddr": "0000:00:06.0", 00:06:42.729 "name": "Nvme0" 00:06:42.729 }, 00:06:42.729 "method": "bdev_nvme_attach_controller" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "method": "bdev_wait_for_examine" 00:06:42.729 } 00:06:42.729 ] 00:06:42.729 } 00:06:42.729 ] 00:06:42.729 } 00:06:42.729 [2024-10-13 11:10:24.204982] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
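[annotation] The trace above is dd/common.sh working out the drive's native block size: it captures the full spdk_nvme_identify dump, pulls out the current LBA format index (#04), then pulls that format's data size (4096 bytes). A minimal sketch of that parsing, written from the regexes visible in the log rather than from the actual common.sh source:

    # Sketch only (not the real dd/common.sh code): recover the native block size
    # the way the trace above does, by parsing spdk_nvme_identify output.
    get_native_nvme_bs_sketch() {
        local pci=$1 id lbaf bs
        id=$("$SPDK_BIN_DIR/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")   # SPDK_BIN_DIR: stand-in for .../spdk/build/bin
        local re_current='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}                     # "04" in the identify dump above
        local re_size="LBA Format #$lbaf: Data Size: *([0-9]+)"
        [[ $id =~ $re_size ]] && bs=${BASH_REMATCH[1]}                          # 4096 for LBA Format #04
        echo "$bs"
    }
    # e.g. get_native_nvme_bs_sketch 0000:00:06.0  -> 4096, the native_bs used below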
00:06:42.729 [2024-10-13 11:10:24.205073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57614 ] 00:06:42.988 [2024-10-13 11:10:24.345437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.988 [2024-10-13 11:10:24.413461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.988 [2024-10-13 11:10:24.534815] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:42.988 [2024-10-13 11:10:24.534894] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.247 [2024-10-13 11:10:24.609295] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:43.247 11:10:24 -- common/autotest_common.sh@643 -- # es=234 00:06:43.247 11:10:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.247 11:10:24 -- common/autotest_common.sh@652 -- # es=106 00:06:43.247 11:10:24 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:43.247 11:10:24 -- common/autotest_common.sh@660 -- # es=1 00:06:43.247 11:10:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.247 00:06:43.247 real 0m0.560s 00:06:43.247 user 0m0.402s 00:06:43.247 sys 0m0.112s 00:06:43.247 11:10:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.247 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.247 ************************************ 00:06:43.247 END TEST dd_bs_lt_native_bs 00:06:43.247 ************************************ 00:06:43.247 11:10:24 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:43.247 11:10:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:43.247 11:10:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.247 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.247 ************************************ 00:06:43.247 START TEST dd_rw 00:06:43.247 ************************************ 00:06:43.247 11:10:24 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:06:43.247 11:10:24 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:43.247 11:10:24 -- dd/basic_rw.sh@12 -- # local count size 00:06:43.247 11:10:24 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:43.247 11:10:24 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:43.247 11:10:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.247 11:10:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.247 11:10:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.247 11:10:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.247 11:10:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:43.247 11:10:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:43.247 11:10:24 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:43.247 11:10:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:43.247 11:10:24 -- dd/basic_rw.sh@23 -- # count=15 00:06:43.247 11:10:24 -- dd/basic_rw.sh@24 -- # count=15 00:06:43.247 11:10:24 -- dd/basic_rw.sh@25 -- # size=61440 00:06:43.247 11:10:24 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:43.247 11:10:24 -- dd/common.sh@98 -- # xtrace_disable 00:06:43.247 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.815 11:10:25 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
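[annotation] dd_bs_lt_native_bs, which finishes in the trace above, deliberately asks spdk_dd for --bs=2048, below the 4096-byte native block size, and expects the copy to be rejected. The exit-status handling logged there (es=234 -> 106 -> 1) belongs to an expected-failure wrapper; a rough, illustrative reconstruction (names are made up, not SPDK's):

    # Rough reconstruction of the expected-failure wrapper traced above.
    not_sketch() {
        "$@"
        local es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style statuses, as in "es=106"
        (( es != 0 )) && es=1                  # collapse any failure to 1, as in "es=1"
        (( es != 0 ))                          # succeed only if the wrapped command failed
    }
    # The test passes because spdk_dd refuses the undersized block size:
    #   not_sketch "$DD" --if="$test_file0" --ob=Nvme0n1 --bs=2048 --json <(gen_conf)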
00:06:43.815 11:10:25 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.815 11:10:25 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.815 11:10:25 -- common/autotest_common.sh@10 -- # set +x 00:06:43.815 [2024-10-13 11:10:25.399638] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:43.815 [2024-10-13 11:10:25.399758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57645 ] 00:06:43.815 { 00:06:43.815 "subsystems": [ 00:06:43.815 { 00:06:43.815 "subsystem": "bdev", 00:06:43.815 "config": [ 00:06:43.815 { 00:06:43.815 "params": { 00:06:43.815 "trtype": "pcie", 00:06:43.815 "traddr": "0000:00:06.0", 00:06:43.815 "name": "Nvme0" 00:06:43.815 }, 00:06:43.815 "method": "bdev_nvme_attach_controller" 00:06:43.815 }, 00:06:43.815 { 00:06:43.815 "method": "bdev_wait_for_examine" 00:06:43.815 } 00:06:43.815 ] 00:06:43.815 } 00:06:43.815 ] 00:06:43.815 } 00:06:44.074 [2024-10-13 11:10:25.536880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.074 [2024-10-13 11:10:25.584926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.333  [2024-10-13T11:10:25.935Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:44.333 00:06:44.333 11:10:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:44.333 11:10:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.333 11:10:25 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.333 11:10:25 -- common/autotest_common.sh@10 -- # set +x 00:06:44.333 [2024-10-13 11:10:25.922690] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:44.333 [2024-10-13 11:10:25.923262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57652 ] 00:06:44.333 { 00:06:44.333 "subsystems": [ 00:06:44.333 { 00:06:44.333 "subsystem": "bdev", 00:06:44.333 "config": [ 00:06:44.333 { 00:06:44.333 "params": { 00:06:44.333 "trtype": "pcie", 00:06:44.333 "traddr": "0000:00:06.0", 00:06:44.333 "name": "Nvme0" 00:06:44.333 }, 00:06:44.333 "method": "bdev_nvme_attach_controller" 00:06:44.333 }, 00:06:44.333 { 00:06:44.333 "method": "bdev_wait_for_examine" 00:06:44.333 } 00:06:44.333 ] 00:06:44.333 } 00:06:44.333 ] 00:06:44.333 } 00:06:44.592 [2024-10-13 11:10:26.059699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.592 [2024-10-13 11:10:26.107454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.851  [2024-10-13T11:10:26.453Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:44.851 00:06:44.852 11:10:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.852 11:10:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:44.852 11:10:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.852 11:10:26 -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.852 11:10:26 -- dd/common.sh@12 -- # local size=61440 00:06:44.852 11:10:26 -- dd/common.sh@14 -- # local bs=1048576 00:06:44.852 11:10:26 -- dd/common.sh@15 -- # local count=1 00:06:44.852 11:10:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.852 11:10:26 -- dd/common.sh@18 -- # gen_conf 00:06:44.852 11:10:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.852 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.852 { 00:06:44.852 "subsystems": [ 00:06:44.852 { 00:06:44.852 "subsystem": "bdev", 00:06:44.852 "config": [ 00:06:44.852 { 00:06:44.852 "params": { 00:06:44.852 "trtype": "pcie", 00:06:44.852 "traddr": "0000:00:06.0", 00:06:44.852 "name": "Nvme0" 00:06:44.852 }, 00:06:44.852 "method": "bdev_nvme_attach_controller" 00:06:44.852 }, 00:06:44.852 { 00:06:44.852 "method": "bdev_wait_for_examine" 00:06:44.852 } 00:06:44.852 ] 00:06:44.852 } 00:06:44.852 ] 00:06:44.852 } 00:06:44.852 [2024-10-13 11:10:26.448980] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
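[annotation] One full basic_rw combination is now visible in the trace: generate 15 blocks of 4096 bytes, write them to the Nvme0n1 bdev, read them back into dd.dump1, diff the two dumps, then zero the device before the next combination. A condensed sketch of that cycle, with gen_bytes/gen_conf/clear_nvme assumed to behave as the surrounding log suggests:

    # Sketch of one write/read/verify cycle as reconstructed from the trace.
    # DD stands in for .../spdk/build/bin/spdk_dd; test_file0/1 are dd.dump0/dd.dump1.
    bs=4096 qd=1 count=15
    size=$(( bs * count ))                                   # 61440, as logged
    gen_bytes "$size" > "$test_file0"                        # assumed: emits $size bytes of test data
    "$DD" --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
    "$DD" --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
    diff -q "$test_file0" "$test_file1"                      # read-back must match the written data
    clear_nvme Nvme0n1 '' "$size"                            # zeroes the bdev (1 MiB of /dev/zero here)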
00:06:44.852 [2024-10-13 11:10:26.449071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57671 ] 00:06:45.111 [2024-10-13 11:10:26.577325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.111 [2024-10-13 11:10:26.625135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.370  [2024-10-13T11:10:26.972Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:45.370 00:06:45.370 11:10:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:45.370 11:10:26 -- dd/basic_rw.sh@23 -- # count=15 00:06:45.370 11:10:26 -- dd/basic_rw.sh@24 -- # count=15 00:06:45.370 11:10:26 -- dd/basic_rw.sh@25 -- # size=61440 00:06:45.370 11:10:26 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:45.370 11:10:26 -- dd/common.sh@98 -- # xtrace_disable 00:06:45.370 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.937 11:10:27 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:45.937 11:10:27 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.937 11:10:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.937 11:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.937 { 00:06:45.937 "subsystems": [ 00:06:45.937 { 00:06:45.937 "subsystem": "bdev", 00:06:45.937 "config": [ 00:06:45.937 { 00:06:45.937 "params": { 00:06:45.937 "trtype": "pcie", 00:06:45.937 "traddr": "0000:00:06.0", 00:06:45.937 "name": "Nvme0" 00:06:45.937 }, 00:06:45.937 "method": "bdev_nvme_attach_controller" 00:06:45.937 }, 00:06:45.937 { 00:06:45.937 "method": "bdev_wait_for_examine" 00:06:45.937 } 00:06:45.937 ] 00:06:45.937 } 00:06:45.937 ] 00:06:45.937 } 00:06:45.937 [2024-10-13 11:10:27.495279] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:45.937 [2024-10-13 11:10:27.495420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:06:46.196 [2024-10-13 11:10:27.629750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.196 [2024-10-13 11:10:27.677413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.196  [2024-10-13T11:10:28.058Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:46.456 00:06:46.456 11:10:27 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:46.456 11:10:27 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:46.456 11:10:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.456 11:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:46.456 [2024-10-13 11:10:28.019398] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
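[annotation] Every spdk_dd run in this log receives its bdev stack as JSON on an anonymous file descriptor (--json /dev/fd/61 or /dev/fd/62), produced by gen_conf. A standalone helper that emits the same configuration printed repeatedly above would look roughly like this; the JSON is copied from the log, only the helper name is invented:

    gen_conf_sketch() {
        cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    }
    # usage: "$DD" --if="$test_file0" --ob=Nvme0n1 --bs=4096 --json <(gen_conf_sketch)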
00:06:46.456 [2024-10-13 11:10:28.019497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57696 ] 00:06:46.456 { 00:06:46.456 "subsystems": [ 00:06:46.456 { 00:06:46.456 "subsystem": "bdev", 00:06:46.456 "config": [ 00:06:46.456 { 00:06:46.456 "params": { 00:06:46.456 "trtype": "pcie", 00:06:46.456 "traddr": "0000:00:06.0", 00:06:46.456 "name": "Nvme0" 00:06:46.456 }, 00:06:46.456 "method": "bdev_nvme_attach_controller" 00:06:46.456 }, 00:06:46.456 { 00:06:46.456 "method": "bdev_wait_for_examine" 00:06:46.456 } 00:06:46.456 ] 00:06:46.456 } 00:06:46.456 ] 00:06:46.456 } 00:06:46.715 [2024-10-13 11:10:28.147261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.715 [2024-10-13 11:10:28.194438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.715  [2024-10-13T11:10:28.576Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:46.974 00:06:46.974 11:10:28 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.974 11:10:28 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:46.974 11:10:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.974 11:10:28 -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.974 11:10:28 -- dd/common.sh@12 -- # local size=61440 00:06:46.974 11:10:28 -- dd/common.sh@14 -- # local bs=1048576 00:06:46.974 11:10:28 -- dd/common.sh@15 -- # local count=1 00:06:46.974 11:10:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.974 11:10:28 -- dd/common.sh@18 -- # gen_conf 00:06:46.974 11:10:28 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.974 11:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:46.974 [2024-10-13 11:10:28.517406] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:46.974 [2024-10-13 11:10:28.517490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57715 ] 00:06:46.974 { 00:06:46.974 "subsystems": [ 00:06:46.974 { 00:06:46.974 "subsystem": "bdev", 00:06:46.974 "config": [ 00:06:46.974 { 00:06:46.974 "params": { 00:06:46.974 "trtype": "pcie", 00:06:46.974 "traddr": "0000:00:06.0", 00:06:46.974 "name": "Nvme0" 00:06:46.974 }, 00:06:46.974 "method": "bdev_nvme_attach_controller" 00:06:46.974 }, 00:06:46.974 { 00:06:46.974 "method": "bdev_wait_for_examine" 00:06:46.974 } 00:06:46.974 ] 00:06:46.974 } 00:06:46.974 ] 00:06:46.974 } 00:06:47.233 [2024-10-13 11:10:28.647154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.233 [2024-10-13 11:10:28.697700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.233  [2024-10-13T11:10:29.094Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:47.492 00:06:47.492 11:10:28 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:47.492 11:10:28 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:47.492 11:10:28 -- dd/basic_rw.sh@23 -- # count=7 00:06:47.492 11:10:28 -- dd/basic_rw.sh@24 -- # count=7 00:06:47.492 11:10:28 -- dd/basic_rw.sh@25 -- # size=57344 00:06:47.492 11:10:28 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:47.492 11:10:28 -- dd/common.sh@98 -- # xtrace_disable 00:06:47.492 11:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:48.060 11:10:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:48.060 11:10:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:48.060 11:10:29 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.060 11:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.060 { 00:06:48.060 "subsystems": [ 00:06:48.060 { 00:06:48.060 "subsystem": "bdev", 00:06:48.060 "config": [ 00:06:48.060 { 00:06:48.060 "params": { 00:06:48.060 "trtype": "pcie", 00:06:48.060 "traddr": "0000:00:06.0", 00:06:48.060 "name": "Nvme0" 00:06:48.060 }, 00:06:48.060 "method": "bdev_nvme_attach_controller" 00:06:48.060 }, 00:06:48.060 { 00:06:48.060 "method": "bdev_wait_for_examine" 00:06:48.060 } 00:06:48.060 ] 00:06:48.060 } 00:06:48.060 ] 00:06:48.060 } 00:06:48.060 [2024-10-13 11:10:29.539257] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
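[annotation] The passes around this point double the block size to 8192 with a block count of 7; the last passes use 16384-byte blocks with a count of 3. The size= values in the log are simply bs times count, which a quick loop confirms:

    # Check the bs/count/size combinations appearing in this trace.
    for pair in "4096 15" "8192 7" "16384 3"; do
        set -- $pair
        echo "bs=$1 count=$2 size=$(( $1 * $2 ))"
    done
    # -> 61440, 57344 and 49152 bytes, matching the size= values logged for each pass.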
00:06:48.060 [2024-10-13 11:10:29.539402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:06:48.319 [2024-10-13 11:10:29.673031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.319 [2024-10-13 11:10:29.727138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.319  [2024-10-13T11:10:30.180Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:48.578 00:06:48.578 11:10:30 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:48.578 11:10:30 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.578 11:10:30 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.578 11:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:48.578 [2024-10-13 11:10:30.066300] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:48.578 [2024-10-13 11:10:30.066405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57740 ] 00:06:48.578 { 00:06:48.578 "subsystems": [ 00:06:48.578 { 00:06:48.578 "subsystem": "bdev", 00:06:48.578 "config": [ 00:06:48.578 { 00:06:48.578 "params": { 00:06:48.578 "trtype": "pcie", 00:06:48.578 "traddr": "0000:00:06.0", 00:06:48.578 "name": "Nvme0" 00:06:48.578 }, 00:06:48.578 "method": "bdev_nvme_attach_controller" 00:06:48.578 }, 00:06:48.578 { 00:06:48.578 "method": "bdev_wait_for_examine" 00:06:48.578 } 00:06:48.578 ] 00:06:48.578 } 00:06:48.578 ] 00:06:48.578 } 00:06:48.837 [2024-10-13 11:10:30.198684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.837 [2024-10-13 11:10:30.245951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.837  [2024-10-13T11:10:30.698Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:49.096 00:06:49.096 11:10:30 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.096 11:10:30 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:49.096 11:10:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:49.096 11:10:30 -- dd/common.sh@11 -- # local nvme_ref= 00:06:49.096 11:10:30 -- dd/common.sh@12 -- # local size=57344 00:06:49.096 11:10:30 -- dd/common.sh@14 -- # local bs=1048576 00:06:49.096 11:10:30 -- dd/common.sh@15 -- # local count=1 00:06:49.096 11:10:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:49.096 11:10:30 -- dd/common.sh@18 -- # gen_conf 00:06:49.096 11:10:30 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.096 11:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.096 [2024-10-13 11:10:30.584105] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:49.096 [2024-10-13 11:10:30.584194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57759 ] 00:06:49.096 { 00:06:49.096 "subsystems": [ 00:06:49.096 { 00:06:49.096 "subsystem": "bdev", 00:06:49.096 "config": [ 00:06:49.096 { 00:06:49.096 "params": { 00:06:49.096 "trtype": "pcie", 00:06:49.096 "traddr": "0000:00:06.0", 00:06:49.096 "name": "Nvme0" 00:06:49.096 }, 00:06:49.096 "method": "bdev_nvme_attach_controller" 00:06:49.096 }, 00:06:49.096 { 00:06:49.096 "method": "bdev_wait_for_examine" 00:06:49.096 } 00:06:49.096 ] 00:06:49.096 } 00:06:49.096 ] 00:06:49.096 } 00:06:49.356 [2024-10-13 11:10:30.719736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.356 [2024-10-13 11:10:30.774452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.356  [2024-10-13T11:10:31.217Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.615 00:06:49.615 11:10:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.615 11:10:31 -- dd/basic_rw.sh@23 -- # count=7 00:06:49.615 11:10:31 -- dd/basic_rw.sh@24 -- # count=7 00:06:49.615 11:10:31 -- dd/basic_rw.sh@25 -- # size=57344 00:06:49.615 11:10:31 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:49.615 11:10:31 -- dd/common.sh@98 -- # xtrace_disable 00:06:49.615 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:06:50.184 11:10:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:50.184 11:10:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:50.184 11:10:31 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.184 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:06:50.184 [2024-10-13 11:10:31.607518] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:50.184 [2024-10-13 11:10:31.607801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57777 ] 00:06:50.184 { 00:06:50.184 "subsystems": [ 00:06:50.184 { 00:06:50.184 "subsystem": "bdev", 00:06:50.184 "config": [ 00:06:50.184 { 00:06:50.184 "params": { 00:06:50.184 "trtype": "pcie", 00:06:50.184 "traddr": "0000:00:06.0", 00:06:50.184 "name": "Nvme0" 00:06:50.184 }, 00:06:50.184 "method": "bdev_nvme_attach_controller" 00:06:50.184 }, 00:06:50.184 { 00:06:50.184 "method": "bdev_wait_for_examine" 00:06:50.184 } 00:06:50.184 ] 00:06:50.184 } 00:06:50.184 ] 00:06:50.184 } 00:06:50.184 [2024-10-13 11:10:31.743632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.443 [2024-10-13 11:10:31.795984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.443  [2024-10-13T11:10:32.304Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:50.702 00:06:50.702 11:10:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:50.702 11:10:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.702 11:10:32 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.702 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:50.702 [2024-10-13 11:10:32.124340] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:50.702 [2024-10-13 11:10:32.124421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:06:50.702 { 00:06:50.702 "subsystems": [ 00:06:50.702 { 00:06:50.702 "subsystem": "bdev", 00:06:50.702 "config": [ 00:06:50.702 { 00:06:50.702 "params": { 00:06:50.702 "trtype": "pcie", 00:06:50.702 "traddr": "0000:00:06.0", 00:06:50.702 "name": "Nvme0" 00:06:50.702 }, 00:06:50.702 "method": "bdev_nvme_attach_controller" 00:06:50.702 }, 00:06:50.702 { 00:06:50.702 "method": "bdev_wait_for_examine" 00:06:50.702 } 00:06:50.702 ] 00:06:50.702 } 00:06:50.702 ] 00:06:50.702 } 00:06:50.702 [2024-10-13 11:10:32.250289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.961 [2024-10-13 11:10:32.305060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.961  [2024-10-13T11:10:32.823Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:51.221 00:06:51.221 11:10:32 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.221 11:10:32 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:51.221 11:10:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:51.221 11:10:32 -- dd/common.sh@11 -- # local nvme_ref= 00:06:51.221 11:10:32 -- dd/common.sh@12 -- # local size=57344 00:06:51.221 11:10:32 -- dd/common.sh@14 -- # local bs=1048576 00:06:51.221 11:10:32 -- dd/common.sh@15 -- # local count=1 00:06:51.221 11:10:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:51.221 11:10:32 -- dd/common.sh@18 -- # gen_conf 00:06:51.221 11:10:32 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.221 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.221 [2024-10-13 
11:10:32.644370] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:51.221 [2024-10-13 11:10:32.644460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57805 ] 00:06:51.221 { 00:06:51.221 "subsystems": [ 00:06:51.221 { 00:06:51.221 "subsystem": "bdev", 00:06:51.221 "config": [ 00:06:51.221 { 00:06:51.221 "params": { 00:06:51.221 "trtype": "pcie", 00:06:51.221 "traddr": "0000:00:06.0", 00:06:51.221 "name": "Nvme0" 00:06:51.221 }, 00:06:51.221 "method": "bdev_nvme_attach_controller" 00:06:51.221 }, 00:06:51.221 { 00:06:51.221 "method": "bdev_wait_for_examine" 00:06:51.221 } 00:06:51.221 ] 00:06:51.221 } 00:06:51.221 ] 00:06:51.221 } 00:06:51.221 [2024-10-13 11:10:32.782053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.480 [2024-10-13 11:10:32.832369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.480  [2024-10-13T11:10:33.340Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:51.738 00:06:51.738 11:10:33 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:51.738 11:10:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.738 11:10:33 -- dd/basic_rw.sh@23 -- # count=3 00:06:51.738 11:10:33 -- dd/basic_rw.sh@24 -- # count=3 00:06:51.738 11:10:33 -- dd/basic_rw.sh@25 -- # size=49152 00:06:51.738 11:10:33 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:51.738 11:10:33 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.738 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:06:51.997 11:10:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:51.997 11:10:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:51.997 11:10:33 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.997 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:06:52.256 { 00:06:52.256 "subsystems": [ 00:06:52.256 { 00:06:52.256 "subsystem": "bdev", 00:06:52.256 "config": [ 00:06:52.256 { 00:06:52.256 "params": { 00:06:52.256 "trtype": "pcie", 00:06:52.256 "traddr": "0000:00:06.0", 00:06:52.256 "name": "Nvme0" 00:06:52.256 }, 00:06:52.256 "method": "bdev_nvme_attach_controller" 00:06:52.256 }, 00:06:52.256 { 00:06:52.256 "method": "bdev_wait_for_examine" 00:06:52.256 } 00:06:52.256 ] 00:06:52.256 } 00:06:52.256 ] 00:06:52.256 } 00:06:52.256 [2024-10-13 11:10:33.621749] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:52.256 [2024-10-13 11:10:33.622530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57823 ] 00:06:52.256 [2024-10-13 11:10:33.758657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.256 [2024-10-13 11:10:33.806910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.516  [2024-10-13T11:10:34.119Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:52.517 00:06:52.517 11:10:34 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:52.517 11:10:34 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.517 11:10:34 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.517 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.794 [2024-10-13 11:10:34.150703] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:52.794 [2024-10-13 11:10:34.150992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57831 ] 00:06:52.794 { 00:06:52.794 "subsystems": [ 00:06:52.794 { 00:06:52.794 "subsystem": "bdev", 00:06:52.794 "config": [ 00:06:52.794 { 00:06:52.794 "params": { 00:06:52.794 "trtype": "pcie", 00:06:52.794 "traddr": "0000:00:06.0", 00:06:52.794 "name": "Nvme0" 00:06:52.794 }, 00:06:52.794 "method": "bdev_nvme_attach_controller" 00:06:52.794 }, 00:06:52.794 { 00:06:52.794 "method": "bdev_wait_for_examine" 00:06:52.794 } 00:06:52.794 ] 00:06:52.794 } 00:06:52.794 ] 00:06:52.794 } 00:06:52.794 [2024-10-13 11:10:34.287558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.794 [2024-10-13 11:10:34.339637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.065  [2024-10-13T11:10:34.667Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:53.065 00:06:53.065 11:10:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.065 11:10:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:53.065 11:10:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.065 11:10:34 -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.065 11:10:34 -- dd/common.sh@12 -- # local size=49152 00:06:53.065 11:10:34 -- dd/common.sh@14 -- # local bs=1048576 00:06:53.065 11:10:34 -- dd/common.sh@15 -- # local count=1 00:06:53.065 11:10:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.065 11:10:34 -- dd/common.sh@18 -- # gen_conf 00:06:53.065 11:10:34 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.065 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:06:53.325 [2024-10-13 11:10:34.687411] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:53.325 [2024-10-13 11:10:34.687532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57850 ] 00:06:53.325 { 00:06:53.325 "subsystems": [ 00:06:53.325 { 00:06:53.325 "subsystem": "bdev", 00:06:53.325 "config": [ 00:06:53.325 { 00:06:53.325 "params": { 00:06:53.325 "trtype": "pcie", 00:06:53.325 "traddr": "0000:00:06.0", 00:06:53.325 "name": "Nvme0" 00:06:53.325 }, 00:06:53.325 "method": "bdev_nvme_attach_controller" 00:06:53.325 }, 00:06:53.325 { 00:06:53.325 "method": "bdev_wait_for_examine" 00:06:53.325 } 00:06:53.325 ] 00:06:53.325 } 00:06:53.325 ] 00:06:53.325 } 00:06:53.325 [2024-10-13 11:10:34.818496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.325 [2024-10-13 11:10:34.868962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.584  [2024-10-13T11:10:35.186Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:53.584 00:06:53.842 11:10:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.842 11:10:35 -- dd/basic_rw.sh@23 -- # count=3 00:06:53.842 11:10:35 -- dd/basic_rw.sh@24 -- # count=3 00:06:53.842 11:10:35 -- dd/basic_rw.sh@25 -- # size=49152 00:06:53.842 11:10:35 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:53.842 11:10:35 -- dd/common.sh@98 -- # xtrace_disable 00:06:53.842 11:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:54.101 11:10:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:54.101 11:10:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:54.101 11:10:35 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.101 11:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:54.359 [2024-10-13 11:10:35.711414] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:54.360 [2024-10-13 11:10:35.711826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57868 ] 00:06:54.360 { 00:06:54.360 "subsystems": [ 00:06:54.360 { 00:06:54.360 "subsystem": "bdev", 00:06:54.360 "config": [ 00:06:54.360 { 00:06:54.360 "params": { 00:06:54.360 "trtype": "pcie", 00:06:54.360 "traddr": "0000:00:06.0", 00:06:54.360 "name": "Nvme0" 00:06:54.360 }, 00:06:54.360 "method": "bdev_nvme_attach_controller" 00:06:54.360 }, 00:06:54.360 { 00:06:54.360 "method": "bdev_wait_for_examine" 00:06:54.360 } 00:06:54.360 ] 00:06:54.360 } 00:06:54.360 ] 00:06:54.360 } 00:06:54.360 [2024-10-13 11:10:35.851275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.360 [2024-10-13 11:10:35.901095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.619  [2024-10-13T11:10:36.221Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:54.619 00:06:54.619 11:10:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.619 11:10:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:54.619 11:10:36 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.619 11:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:54.878 [2024-10-13 11:10:36.257361] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:54.878 [2024-10-13 11:10:36.257454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57881 ] 00:06:54.878 { 00:06:54.878 "subsystems": [ 00:06:54.878 { 00:06:54.878 "subsystem": "bdev", 00:06:54.878 "config": [ 00:06:54.878 { 00:06:54.878 "params": { 00:06:54.878 "trtype": "pcie", 00:06:54.878 "traddr": "0000:00:06.0", 00:06:54.878 "name": "Nvme0" 00:06:54.878 }, 00:06:54.878 "method": "bdev_nvme_attach_controller" 00:06:54.878 }, 00:06:54.878 { 00:06:54.878 "method": "bdev_wait_for_examine" 00:06:54.878 } 00:06:54.878 ] 00:06:54.878 } 00:06:54.878 ] 00:06:54.878 } 00:06:54.878 [2024-10-13 11:10:36.395176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.878 [2024-10-13 11:10:36.449391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.137  [2024-10-13T11:10:36.999Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:55.397 00:06:55.397 11:10:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.397 11:10:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:55.397 11:10:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:55.397 11:10:36 -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.397 11:10:36 -- dd/common.sh@12 -- # local size=49152 00:06:55.397 11:10:36 -- dd/common.sh@14 -- # local bs=1048576 00:06:55.397 11:10:36 -- dd/common.sh@15 -- # local count=1 00:06:55.397 11:10:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:55.397 11:10:36 -- dd/common.sh@18 -- # gen_conf 00:06:55.397 11:10:36 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.397 11:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:55.397 [2024-10-13 
11:10:36.809834] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:55.397 [2024-10-13 11:10:36.810186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57894 ] 00:06:55.397 { 00:06:55.397 "subsystems": [ 00:06:55.397 { 00:06:55.397 "subsystem": "bdev", 00:06:55.397 "config": [ 00:06:55.397 { 00:06:55.397 "params": { 00:06:55.397 "trtype": "pcie", 00:06:55.397 "traddr": "0000:00:06.0", 00:06:55.397 "name": "Nvme0" 00:06:55.397 }, 00:06:55.397 "method": "bdev_nvme_attach_controller" 00:06:55.397 }, 00:06:55.397 { 00:06:55.397 "method": "bdev_wait_for_examine" 00:06:55.397 } 00:06:55.397 ] 00:06:55.397 } 00:06:55.397 ] 00:06:55.397 } 00:06:55.397 [2024-10-13 11:10:36.939890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.397 [2024-10-13 11:10:36.992561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.656  [2024-10-13T11:10:37.518Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:55.916 00:06:55.916 00:06:55.916 real 0m12.547s 00:06:55.916 user 0m9.395s 00:06:55.916 sys 0m2.020s 00:06:55.916 ************************************ 00:06:55.916 END TEST dd_rw 00:06:55.916 ************************************ 00:06:55.916 11:10:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.916 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 11:10:37 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:55.916 11:10:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.916 11:10:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.916 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 ************************************ 00:06:55.916 START TEST dd_rw_offset 00:06:55.916 ************************************ 00:06:55.916 11:10:37 -- common/autotest_common.sh@1104 -- # basic_offset 00:06:55.916 11:10:37 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:55.916 11:10:37 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:55.916 11:10:37 -- dd/common.sh@98 -- # xtrace_disable 00:06:55.916 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 11:10:37 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:55.916 11:10:37 -- dd/basic_rw.sh@56 -- # 
data=mnuwj4oqw3tvmr1vp9zhgmch8jbirivger7hqicx841iuws30lgbip9b0502jfkzmn5rdszpiv508nv5ia6cr7ogbt9dld107cgto4wstsoc3uzt07xcdz8jhhiihz18emy1krja13a66bugj56uefghbfw00ykiayjdvjbopeb8hir57rftsjx84qjwg588k8iivagssxkonlkp7ocpfhhtewhha2v5jc35n2y2opvzcgp977auto456wl67hiknwmxt7tml7xt4ixycjv6jdpx8t6lv03ais3si5aht91tfpd3lf3luyeq18dpuj84zo9jx1s0ibyuwz3ydkcddt6jollrhd58763elkis215guwd54b93ot83ndpmvyx6i850uqrnk38ol8q3qmncb3327lmgeso4gu8sti9omiqnmxxzl5jsvjkug8ci9ne4vt9qez08o8u9s4fbjjuhljaneaatnh1z7b6js2pvgjg1rahfzo18po5ahybifgm3ff5ketyksyn7nuby28c0pbsnzdff9zsrhaomsc8w4ay6nupkqqvre2y3cicy7w3axkwtiel2yj1o0ycas03g3hjew7n8h9vm4ix8mwk0opbm93bkapfxof0ocbkclxakqfbr27y0dwnak1lkp2ujhwb0jew6enk8w4auquuhj6suzo5t5935qnoem0q5jycowcl2dvyy98yyksq8va2ll1bkf58umrolwi4h5dynqr0h1d5qpbdlw4o67s4pxl6hcstdabnj2pyic7s4ilurtjqm3j6fwj35bbqiley6q5nwlicy98fperg0ttrflpumqpgrrtltljor174upxq2gz773fm29oz6fpnvto4ydoqn5sbqsjbpnxw1b69p9tqs75p2t1jaljnabjgnb3x6ys9knf3ilmateejyan0roadop5hpdk4yba7uw2rwxftjopdjzswp1cps6550bafbjgc9cm33agjg63n0hrmsihoyuzkixipvac0hz6t7sah8dyi51ibl97vboor16glyih17ywanv1v6zmq0w6h9pf9rd7wu528xkwbe750qza2d2p8wser1qrfe72pm9j97uo1kwisowdumgcipp4encq9lc0rkhgp67ajsu2kz3fq846cj4gr1ro0wgq24q01jc007h5sefj7ybo0qddp4wzj7im3kls51rnn9lbephjvq7bwvwbxajynn0clpbb94xansju5dewdxbr80ufywws1q9ggsv0dq6sfl6q5xcikb7awsqrks2beoh1mao8f7979dcwxejqadkujiy2e6w9vrcyp9dslts04hfm4r562owd6mppkoqb3s3ytso3p9r1ftbsnln6hbnx2db4k09tavymkl9sln6tfej94g7lb3g2tkvhs4hydfv9bvdpbwifxwjdxm1vhuaewnhrenpv11w2sf1mzktephtntu9yh5w2w99u61a8iricmjtn588s24q74chsa1lqsie1qkw2dijf5gze47jua6o180og9jiuqb0kt1qrhq72vh7rysf0r8ia01o6hf2ghku4nc16n60olvvqz9q2jwbt9lc3ni4ludbq3pd1ymwsf4ij2toygts9wliwiqzpe1dpcmqohkt9ld5hfp6iin0ffwu0nimzaunbrnvf7dvkajvh5q8noj7oly1q3kzlsb7kabefbjzx617y3fajzvc7t1husc39n2ct15u4fj1fp83guh5i3bfc5vns85njpl9a2fjnfasj3qydseeyeko8jwsx4e0nfz8l9l4pnrhsi2ew4kwkbpna4fyraz5ki6hckpcwblkvnc4c0digy53kghhpbk6liyztn2kpxkp8edtok2wpxbdn20zw9f65pc5oeo75nsmdh35qsam2bqzp48s3rvundyvato34t5t9fsknyjk312bfpyfi6h7konc6on5x9pqeamx5ft2icv6huib8x6qzd1rapd7i1n3gh88zzrka9w0naf616rp25pehkgp1jbqodvx5c1ktkc05kxld44s4v27j4b33pw6cur2rhtw8vjkao6jcb32xc7w3ce1oazcz74gr2ypw6q0qlrr8wrry625bfag0w8v1v0rarlr7awobnfoy3wojxh07je6ewzjvycoywdr5lzxfsoqzslp120exh82dn7ucxzroohvohjtq8csdqck8ky24p7g5wrk63oflv4c2dfmj5ugpq5hpu7s5d4165jdtrtmzhkebuavu29f0waz8u7zdu134im1ivoqg74wbafsvrfc4ikqu4jr3duiqmd9ytvzs121fwngi3af8yonp6wrwuxcxxhac87afkj73fi559gvk4ncdowwgv89lry4k6iors6ix195qih278pp60id1txmg5gzc2i9cjxqcy4d7eoiy5eslalo28qdwbsae8oqjkvsfcwdcztu4qdqly6gqcsla6yz2u34qz8dn0yncuyfflopwuwvp05fj6r4yzz5bsdowp22k213ji3tq0wln8sbqcom8b1acbc35ks2onlvi4lm700mcw0wf9xgi7rb8aykzg9ultlai8m73qeno9i4qhd2eadm7ju7t1gdjcdpa8yknzi9wgh2xbl2a3wdu0ztr7dvvbyjov7pbrnbk14mwxhndzoh7hfgcghfdwrgwdpn9gbrabfx3bu4mrlo9h4is211gm9qznxowqs9rvbj131dve2i2u59ujn0gwo5qfrza5uid3msym9eolzlzxd9coqqjdrmh8xuasnv78kb6y9qliqyqh1itdwhq6032s78lyj9wtfkra22e4wz72v4la41pqc67uxm5jamhzz5hkojch7mz0idq9ja1ymfy8m48i2laerrrvah9agfb0icbzgip4fwqngf136405h9wmq8o5iunq7a6nim9erlnlwrzw719h80me3u6q4q4qp1gibcgo9kh12zyp54q86xpbu4ld4bre26e3yni9vl0til5k2w9o32jgmcc56vpa146jbj94nzlwc3eownufxcxoz3flq6r6wbraqpcj4hl5l1gwnc88p3pvnivbaqguzaw1awo78x34leipo1mnsay2g7nb3k2es6dzwsqzypmrxymvvpskcjgieyekk9wmhc9mmcor9g2lu9zdposo84by2p0p458lbewzhigbu8xom1w0ayl6tcxe8aid4ultnor0fazd9o90wlv0av3r5n1pnxzqv4y84dkstfqq9cbsgy7ltw60h5xx7mwwegv6lg53u39egy2c69iz1149xlhxpjilfans0kfe45cgnsueyccjplkbzewjobtwq5liq55bbx9omw293r56kqrond6yyt06ru3k5izn7pkf5eszqjmpyo20xm1g1idwklb92kk99x71e0h8l0440d2fz0sqhlgva4jd9d4i6ynwtzdpsec615crxjy1b6vjfwttvip4ryjab1mop9fo68jx2334ft2sbrfwooky3oxjwmd5ahq9a8mt6oozkt63ekwztnlomdie5ic
cwhef1vjplt101m4nfzbc9ve1rkzqatczo8v5ixlzafrvnv442o93lj82s37ojy3y5qnvhaytf7vlwvfuruw2ffhh6wi1s1li9n8ejru03o212wdko5tc6tm70ku0r2q4460i32z9lkjob973qmou5wi3vq08wntx96h1hij2xshenb0cvc5oqys1x2yk4ab69mi4k1tmjwzqpgtgw1sfhrw5hbg137tik49wpwl225cc95bh0j1bya2hmg4qizdh4ixf6oumaukst0zapwq3w28d814fd8u940vm9sruxjunjt2mwtki1niar8e3do17o4fsw3t63gos8h4h1umnsbw09bp0wddn9arhxhevxbvfoveiodz5f9c3dc73dt7hnap529thjml3dt1wtr23r9xhqn5u9my5k0t3h7ofpay9gk7slo76vjsqdms7lgnjtiyda0uzxgt5xcdmkz130e3n3dfhqodu6c2cblarqv7ltgydncadh1jqv0irurhf6vmfdbgu8o23pod8cu20hssfnst9kgm16 00:06:55.916 11:10:37 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:55.916 11:10:37 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:55.916 11:10:37 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.916 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 [2024-10-13 11:10:37.485524] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:55.916 [2024-10-13 11:10:37.485916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:06:55.916 { 00:06:55.916 "subsystems": [ 00:06:55.916 { 00:06:55.916 "subsystem": "bdev", 00:06:55.916 "config": [ 00:06:55.916 { 00:06:55.916 "params": { 00:06:55.916 "trtype": "pcie", 00:06:55.916 "traddr": "0000:00:06.0", 00:06:55.916 "name": "Nvme0" 00:06:55.916 }, 00:06:55.916 "method": "bdev_nvme_attach_controller" 00:06:55.916 }, 00:06:55.916 { 00:06:55.916 "method": "bdev_wait_for_examine" 00:06:55.916 } 00:06:55.916 ] 00:06:55.916 } 00:06:55.916 ] 00:06:55.916 } 00:06:56.176 [2024-10-13 11:10:37.626966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.176 [2024-10-13 11:10:37.696131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.435  [2024-10-13T11:10:38.037Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:56.435 00:06:56.435 11:10:38 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:56.435 11:10:38 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:56.435 11:10:38 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.435 11:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:56.694 [2024-10-13 11:10:38.076218] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:56.694 [2024-10-13 11:10:38.076355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57936 ] 00:06:56.694 { 00:06:56.694 "subsystems": [ 00:06:56.694 { 00:06:56.694 "subsystem": "bdev", 00:06:56.694 "config": [ 00:06:56.694 { 00:06:56.694 "params": { 00:06:56.694 "trtype": "pcie", 00:06:56.694 "traddr": "0000:00:06.0", 00:06:56.694 "name": "Nvme0" 00:06:56.694 }, 00:06:56.694 "method": "bdev_nvme_attach_controller" 00:06:56.694 }, 00:06:56.694 { 00:06:56.694 "method": "bdev_wait_for_examine" 00:06:56.694 } 00:06:56.694 ] 00:06:56.694 } 00:06:56.694 ] 00:06:56.694 } 00:06:56.694 [2024-10-13 11:10:38.216942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.694 [2024-10-13 11:10:38.267446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.953  [2024-10-13T11:10:38.555Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:56.953 00:06:56.953 11:10:38 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:56.954 11:10:38 -- dd/basic_rw.sh@72 -- # [[ mnuwj4oqw3tvmr1vp9zhgmch8jbirivger7hqicx841iuws30lgbip9b0502jfkzmn5rdszpiv508nv5ia6cr7ogbt9dld107cgto4wstsoc3uzt07xcdz8jhhiihz18emy1krja13a66bugj56uefghbfw00ykiayjdvjbopeb8hir57rftsjx84qjwg588k8iivagssxkonlkp7ocpfhhtewhha2v5jc35n2y2opvzcgp977auto456wl67hiknwmxt7tml7xt4ixycjv6jdpx8t6lv03ais3si5aht91tfpd3lf3luyeq18dpuj84zo9jx1s0ibyuwz3ydkcddt6jollrhd58763elkis215guwd54b93ot83ndpmvyx6i850uqrnk38ol8q3qmncb3327lmgeso4gu8sti9omiqnmxxzl5jsvjkug8ci9ne4vt9qez08o8u9s4fbjjuhljaneaatnh1z7b6js2pvgjg1rahfzo18po5ahybifgm3ff5ketyksyn7nuby28c0pbsnzdff9zsrhaomsc8w4ay6nupkqqvre2y3cicy7w3axkwtiel2yj1o0ycas03g3hjew7n8h9vm4ix8mwk0opbm93bkapfxof0ocbkclxakqfbr27y0dwnak1lkp2ujhwb0jew6enk8w4auquuhj6suzo5t5935qnoem0q5jycowcl2dvyy98yyksq8va2ll1bkf58umrolwi4h5dynqr0h1d5qpbdlw4o67s4pxl6hcstdabnj2pyic7s4ilurtjqm3j6fwj35bbqiley6q5nwlicy98fperg0ttrflpumqpgrrtltljor174upxq2gz773fm29oz6fpnvto4ydoqn5sbqsjbpnxw1b69p9tqs75p2t1jaljnabjgnb3x6ys9knf3ilmateejyan0roadop5hpdk4yba7uw2rwxftjopdjzswp1cps6550bafbjgc9cm33agjg63n0hrmsihoyuzkixipvac0hz6t7sah8dyi51ibl97vboor16glyih17ywanv1v6zmq0w6h9pf9rd7wu528xkwbe750qza2d2p8wser1qrfe72pm9j97uo1kwisowdumgcipp4encq9lc0rkhgp67ajsu2kz3fq846cj4gr1ro0wgq24q01jc007h5sefj7ybo0qddp4wzj7im3kls51rnn9lbephjvq7bwvwbxajynn0clpbb94xansju5dewdxbr80ufywws1q9ggsv0dq6sfl6q5xcikb7awsqrks2beoh1mao8f7979dcwxejqadkujiy2e6w9vrcyp9dslts04hfm4r562owd6mppkoqb3s3ytso3p9r1ftbsnln6hbnx2db4k09tavymkl9sln6tfej94g7lb3g2tkvhs4hydfv9bvdpbwifxwjdxm1vhuaewnhrenpv11w2sf1mzktephtntu9yh5w2w99u61a8iricmjtn588s24q74chsa1lqsie1qkw2dijf5gze47jua6o180og9jiuqb0kt1qrhq72vh7rysf0r8ia01o6hf2ghku4nc16n60olvvqz9q2jwbt9lc3ni4ludbq3pd1ymwsf4ij2toygts9wliwiqzpe1dpcmqohkt9ld5hfp6iin0ffwu0nimzaunbrnvf7dvkajvh5q8noj7oly1q3kzlsb7kabefbjzx617y3fajzvc7t1husc39n2ct15u4fj1fp83guh5i3bfc5vns85njpl9a2fjnfasj3qydseeyeko8jwsx4e0nfz8l9l4pnrhsi2ew4kwkbpna4fyraz5ki6hckpcwblkvnc4c0digy53kghhpbk6liyztn2kpxkp8edtok2wpxbdn20zw9f65pc5oeo75nsmdh35qsam2bqzp48s3rvundyvato34t5t9fsknyjk312bfpyfi6h7konc6on5x9pqeamx5ft2icv6huib8x6qzd1rapd7i1n3gh88zzrka9w0naf616rp25pehkgp1jbqodvx5c1ktkc05kxld44s4v27j4b33pw6cur2rhtw8vjkao6jcb32xc7w3ce1oazcz74gr2ypw6q0qlrr8wrry625bfag0w8v1v0rarlr7awobnfoy3wojxh07je6ewzjvycoywdr5lzxfsoqzslp120exh82dn7ucxzroohvohjtq8csdqck8ky24p7g5wrk63oflv4c2dfmj5ugpq5hpu7s5d4165jdtrtmzhkebuavu29f0waz8u7zdu134im1ivoqg74wbafsvrfc4ikqu4jr3duiqmd9ytvzs121fwngi3af8yonp6wrwuxcxxhac87afkj73f
i559gvk4ncdowwgv89lry4k6iors6ix195qih278pp60id1txmg5gzc2i9cjxqcy4d7eoiy5eslalo28qdwbsae8oqjkvsfcwdcztu4qdqly6gqcsla6yz2u34qz8dn0yncuyfflopwuwvp05fj6r4yzz5bsdowp22k213ji3tq0wln8sbqcom8b1acbc35ks2onlvi4lm700mcw0wf9xgi7rb8aykzg9ultlai8m73qeno9i4qhd2eadm7ju7t1gdjcdpa8yknzi9wgh2xbl2a3wdu0ztr7dvvbyjov7pbrnbk14mwxhndzoh7hfgcghfdwrgwdpn9gbrabfx3bu4mrlo9h4is211gm9qznxowqs9rvbj131dve2i2u59ujn0gwo5qfrza5uid3msym9eolzlzxd9coqqjdrmh8xuasnv78kb6y9qliqyqh1itdwhq6032s78lyj9wtfkra22e4wz72v4la41pqc67uxm5jamhzz5hkojch7mz0idq9ja1ymfy8m48i2laerrrvah9agfb0icbzgip4fwqngf136405h9wmq8o5iunq7a6nim9erlnlwrzw719h80me3u6q4q4qp1gibcgo9kh12zyp54q86xpbu4ld4bre26e3yni9vl0til5k2w9o32jgmcc56vpa146jbj94nzlwc3eownufxcxoz3flq6r6wbraqpcj4hl5l1gwnc88p3pvnivbaqguzaw1awo78x34leipo1mnsay2g7nb3k2es6dzwsqzypmrxymvvpskcjgieyekk9wmhc9mmcor9g2lu9zdposo84by2p0p458lbewzhigbu8xom1w0ayl6tcxe8aid4ultnor0fazd9o90wlv0av3r5n1pnxzqv4y84dkstfqq9cbsgy7ltw60h5xx7mwwegv6lg53u39egy2c69iz1149xlhxpjilfans0kfe45cgnsueyccjplkbzewjobtwq5liq55bbx9omw293r56kqrond6yyt06ru3k5izn7pkf5eszqjmpyo20xm1g1idwklb92kk99x71e0h8l0440d2fz0sqhlgva4jd9d4i6ynwtzdpsec615crxjy1b6vjfwttvip4ryjab1mop9fo68jx2334ft2sbrfwooky3oxjwmd5ahq9a8mt6oozkt63ekwztnlomdie5iccwhef1vjplt101m4nfzbc9ve1rkzqatczo8v5ixlzafrvnv442o93lj82s37ojy3y5qnvhaytf7vlwvfuruw2ffhh6wi1s1li9n8ejru03o212wdko5tc6tm70ku0r2q4460i32z9lkjob973qmou5wi3vq08wntx96h1hij2xshenb0cvc5oqys1x2yk4ab69mi4k1tmjwzqpgtgw1sfhrw5hbg137tik49wpwl225cc95bh0j1bya2hmg4qizdh4ixf6oumaukst0zapwq3w28d814fd8u940vm9sruxjunjt2mwtki1niar8e3do17o4fsw3t63gos8h4h1umnsbw09bp0wddn9arhxhevxbvfoveiodz5f9c3dc73dt7hnap529thjml3dt1wtr23r9xhqn5u9my5k0t3h7ofpay9gk7slo76vjsqdms7lgnjtiyda0uzxgt5xcdmkz130e3n3dfhqodu6c2cblarqv7ltgydncadh1jqv0irurhf6vmfdbgu8o23pod8cu20hssfnst9kgm16 == \m\n\u\w\j\4\o\q\w\3\t\v\m\r\1\v\p\9\z\h\g\m\c\h\8\j\b\i\r\i\v\g\e\r\7\h\q\i\c\x\8\4\1\i\u\w\s\3\0\l\g\b\i\p\9\b\0\5\0\2\j\f\k\z\m\n\5\r\d\s\z\p\i\v\5\0\8\n\v\5\i\a\6\c\r\7\o\g\b\t\9\d\l\d\1\0\7\c\g\t\o\4\w\s\t\s\o\c\3\u\z\t\0\7\x\c\d\z\8\j\h\h\i\i\h\z\1\8\e\m\y\1\k\r\j\a\1\3\a\6\6\b\u\g\j\5\6\u\e\f\g\h\b\f\w\0\0\y\k\i\a\y\j\d\v\j\b\o\p\e\b\8\h\i\r\5\7\r\f\t\s\j\x\8\4\q\j\w\g\5\8\8\k\8\i\i\v\a\g\s\s\x\k\o\n\l\k\p\7\o\c\p\f\h\h\t\e\w\h\h\a\2\v\5\j\c\3\5\n\2\y\2\o\p\v\z\c\g\p\9\7\7\a\u\t\o\4\5\6\w\l\6\7\h\i\k\n\w\m\x\t\7\t\m\l\7\x\t\4\i\x\y\c\j\v\6\j\d\p\x\8\t\6\l\v\0\3\a\i\s\3\s\i\5\a\h\t\9\1\t\f\p\d\3\l\f\3\l\u\y\e\q\1\8\d\p\u\j\8\4\z\o\9\j\x\1\s\0\i\b\y\u\w\z\3\y\d\k\c\d\d\t\6\j\o\l\l\r\h\d\5\8\7\6\3\e\l\k\i\s\2\1\5\g\u\w\d\5\4\b\9\3\o\t\8\3\n\d\p\m\v\y\x\6\i\8\5\0\u\q\r\n\k\3\8\o\l\8\q\3\q\m\n\c\b\3\3\2\7\l\m\g\e\s\o\4\g\u\8\s\t\i\9\o\m\i\q\n\m\x\x\z\l\5\j\s\v\j\k\u\g\8\c\i\9\n\e\4\v\t\9\q\e\z\0\8\o\8\u\9\s\4\f\b\j\j\u\h\l\j\a\n\e\a\a\t\n\h\1\z\7\b\6\j\s\2\p\v\g\j\g\1\r\a\h\f\z\o\1\8\p\o\5\a\h\y\b\i\f\g\m\3\f\f\5\k\e\t\y\k\s\y\n\7\n\u\b\y\2\8\c\0\p\b\s\n\z\d\f\f\9\z\s\r\h\a\o\m\s\c\8\w\4\a\y\6\n\u\p\k\q\q\v\r\e\2\y\3\c\i\c\y\7\w\3\a\x\k\w\t\i\e\l\2\y\j\1\o\0\y\c\a\s\0\3\g\3\h\j\e\w\7\n\8\h\9\v\m\4\i\x\8\m\w\k\0\o\p\b\m\9\3\b\k\a\p\f\x\o\f\0\o\c\b\k\c\l\x\a\k\q\f\b\r\2\7\y\0\d\w\n\a\k\1\l\k\p\2\u\j\h\w\b\0\j\e\w\6\e\n\k\8\w\4\a\u\q\u\u\h\j\6\s\u\z\o\5\t\5\9\3\5\q\n\o\e\m\0\q\5\j\y\c\o\w\c\l\2\d\v\y\y\9\8\y\y\k\s\q\8\v\a\2\l\l\1\b\k\f\5\8\u\m\r\o\l\w\i\4\h\5\d\y\n\q\r\0\h\1\d\5\q\p\b\d\l\w\4\o\6\7\s\4\p\x\l\6\h\c\s\t\d\a\b\n\j\2\p\y\i\c\7\s\4\i\l\u\r\t\j\q\m\3\j\6\f\w\j\3\5\b\b\q\i\l\e\y\6\q\5\n\w\l\i\c\y\9\8\f\p\e\r\g\0\t\t\r\f\l\p\u\m\q\p\g\r\r\t\l\t\l\j\o\r\1\7\4\u\p\x\q\2\g\z\7\7\3\f\m\2\9\o\z\6\f\p\n\v\t\o\4\y\d\o\q\n\5\s\b\q\s\j\b\p\n\x\w\1\b\6\9\p\9\t\q\s\7\5\p\2\t\1\j\a\l\j\n
\a\b\j\g\n\b\3\x\6\y\s\9\k\n\f\3\i\l\m\a\t\e\e\j\y\a\n\0\r\o\a\d\o\p\5\h\p\d\k\4\y\b\a\7\u\w\2\r\w\x\f\t\j\o\p\d\j\z\s\w\p\1\c\p\s\6\5\5\0\b\a\f\b\j\g\c\9\c\m\3\3\a\g\j\g\6\3\n\0\h\r\m\s\i\h\o\y\u\z\k\i\x\i\p\v\a\c\0\h\z\6\t\7\s\a\h\8\d\y\i\5\1\i\b\l\9\7\v\b\o\o\r\1\6\g\l\y\i\h\1\7\y\w\a\n\v\1\v\6\z\m\q\0\w\6\h\9\p\f\9\r\d\7\w\u\5\2\8\x\k\w\b\e\7\5\0\q\z\a\2\d\2\p\8\w\s\e\r\1\q\r\f\e\7\2\p\m\9\j\9\7\u\o\1\k\w\i\s\o\w\d\u\m\g\c\i\p\p\4\e\n\c\q\9\l\c\0\r\k\h\g\p\6\7\a\j\s\u\2\k\z\3\f\q\8\4\6\c\j\4\g\r\1\r\o\0\w\g\q\2\4\q\0\1\j\c\0\0\7\h\5\s\e\f\j\7\y\b\o\0\q\d\d\p\4\w\z\j\7\i\m\3\k\l\s\5\1\r\n\n\9\l\b\e\p\h\j\v\q\7\b\w\v\w\b\x\a\j\y\n\n\0\c\l\p\b\b\9\4\x\a\n\s\j\u\5\d\e\w\d\x\b\r\8\0\u\f\y\w\w\s\1\q\9\g\g\s\v\0\d\q\6\s\f\l\6\q\5\x\c\i\k\b\7\a\w\s\q\r\k\s\2\b\e\o\h\1\m\a\o\8\f\7\9\7\9\d\c\w\x\e\j\q\a\d\k\u\j\i\y\2\e\6\w\9\v\r\c\y\p\9\d\s\l\t\s\0\4\h\f\m\4\r\5\6\2\o\w\d\6\m\p\p\k\o\q\b\3\s\3\y\t\s\o\3\p\9\r\1\f\t\b\s\n\l\n\6\h\b\n\x\2\d\b\4\k\0\9\t\a\v\y\m\k\l\9\s\l\n\6\t\f\e\j\9\4\g\7\l\b\3\g\2\t\k\v\h\s\4\h\y\d\f\v\9\b\v\d\p\b\w\i\f\x\w\j\d\x\m\1\v\h\u\a\e\w\n\h\r\e\n\p\v\1\1\w\2\s\f\1\m\z\k\t\e\p\h\t\n\t\u\9\y\h\5\w\2\w\9\9\u\6\1\a\8\i\r\i\c\m\j\t\n\5\8\8\s\2\4\q\7\4\c\h\s\a\1\l\q\s\i\e\1\q\k\w\2\d\i\j\f\5\g\z\e\4\7\j\u\a\6\o\1\8\0\o\g\9\j\i\u\q\b\0\k\t\1\q\r\h\q\7\2\v\h\7\r\y\s\f\0\r\8\i\a\0\1\o\6\h\f\2\g\h\k\u\4\n\c\1\6\n\6\0\o\l\v\v\q\z\9\q\2\j\w\b\t\9\l\c\3\n\i\4\l\u\d\b\q\3\p\d\1\y\m\w\s\f\4\i\j\2\t\o\y\g\t\s\9\w\l\i\w\i\q\z\p\e\1\d\p\c\m\q\o\h\k\t\9\l\d\5\h\f\p\6\i\i\n\0\f\f\w\u\0\n\i\m\z\a\u\n\b\r\n\v\f\7\d\v\k\a\j\v\h\5\q\8\n\o\j\7\o\l\y\1\q\3\k\z\l\s\b\7\k\a\b\e\f\b\j\z\x\6\1\7\y\3\f\a\j\z\v\c\7\t\1\h\u\s\c\3\9\n\2\c\t\1\5\u\4\f\j\1\f\p\8\3\g\u\h\5\i\3\b\f\c\5\v\n\s\8\5\n\j\p\l\9\a\2\f\j\n\f\a\s\j\3\q\y\d\s\e\e\y\e\k\o\8\j\w\s\x\4\e\0\n\f\z\8\l\9\l\4\p\n\r\h\s\i\2\e\w\4\k\w\k\b\p\n\a\4\f\y\r\a\z\5\k\i\6\h\c\k\p\c\w\b\l\k\v\n\c\4\c\0\d\i\g\y\5\3\k\g\h\h\p\b\k\6\l\i\y\z\t\n\2\k\p\x\k\p\8\e\d\t\o\k\2\w\p\x\b\d\n\2\0\z\w\9\f\6\5\p\c\5\o\e\o\7\5\n\s\m\d\h\3\5\q\s\a\m\2\b\q\z\p\4\8\s\3\r\v\u\n\d\y\v\a\t\o\3\4\t\5\t\9\f\s\k\n\y\j\k\3\1\2\b\f\p\y\f\i\6\h\7\k\o\n\c\6\o\n\5\x\9\p\q\e\a\m\x\5\f\t\2\i\c\v\6\h\u\i\b\8\x\6\q\z\d\1\r\a\p\d\7\i\1\n\3\g\h\8\8\z\z\r\k\a\9\w\0\n\a\f\6\1\6\r\p\2\5\p\e\h\k\g\p\1\j\b\q\o\d\v\x\5\c\1\k\t\k\c\0\5\k\x\l\d\4\4\s\4\v\2\7\j\4\b\3\3\p\w\6\c\u\r\2\r\h\t\w\8\v\j\k\a\o\6\j\c\b\3\2\x\c\7\w\3\c\e\1\o\a\z\c\z\7\4\g\r\2\y\p\w\6\q\0\q\l\r\r\8\w\r\r\y\6\2\5\b\f\a\g\0\w\8\v\1\v\0\r\a\r\l\r\7\a\w\o\b\n\f\o\y\3\w\o\j\x\h\0\7\j\e\6\e\w\z\j\v\y\c\o\y\w\d\r\5\l\z\x\f\s\o\q\z\s\l\p\1\2\0\e\x\h\8\2\d\n\7\u\c\x\z\r\o\o\h\v\o\h\j\t\q\8\c\s\d\q\c\k\8\k\y\2\4\p\7\g\5\w\r\k\6\3\o\f\l\v\4\c\2\d\f\m\j\5\u\g\p\q\5\h\p\u\7\s\5\d\4\1\6\5\j\d\t\r\t\m\z\h\k\e\b\u\a\v\u\2\9\f\0\w\a\z\8\u\7\z\d\u\1\3\4\i\m\1\i\v\o\q\g\7\4\w\b\a\f\s\v\r\f\c\4\i\k\q\u\4\j\r\3\d\u\i\q\m\d\9\y\t\v\z\s\1\2\1\f\w\n\g\i\3\a\f\8\y\o\n\p\6\w\r\w\u\x\c\x\x\h\a\c\8\7\a\f\k\j\7\3\f\i\5\5\9\g\v\k\4\n\c\d\o\w\w\g\v\8\9\l\r\y\4\k\6\i\o\r\s\6\i\x\1\9\5\q\i\h\2\7\8\p\p\6\0\i\d\1\t\x\m\g\5\g\z\c\2\i\9\c\j\x\q\c\y\4\d\7\e\o\i\y\5\e\s\l\a\l\o\2\8\q\d\w\b\s\a\e\8\o\q\j\k\v\s\f\c\w\d\c\z\t\u\4\q\d\q\l\y\6\g\q\c\s\l\a\6\y\z\2\u\3\4\q\z\8\d\n\0\y\n\c\u\y\f\f\l\o\p\w\u\w\v\p\0\5\f\j\6\r\4\y\z\z\5\b\s\d\o\w\p\2\2\k\2\1\3\j\i\3\t\q\0\w\l\n\8\s\b\q\c\o\m\8\b\1\a\c\b\c\3\5\k\s\2\o\n\l\v\i\4\l\m\7\0\0\m\c\w\0\w\f\9\x\g\i\7\r\b\8\a\y\k\z\g\9\u\l\t\l\a\i\8\m\7\3\q\e\n\o\9\i\4\q\h\d\2\e\a\d\m\7\j\u\7\t\1\g\d\j\c\d\p\a\8\y\k\n\z\i\9\w\g\h\2\x\b\l\2\a\3\w\d\u\0\z\t\r\7\d\v\v\b\y\j\o\v\7\p\b\r\n\b\k\1\4\m\w\x\h\n\d\z\o\h\7\h\f\g\c\g\h\f\d\w\r\
g\w\d\p\n\9\g\b\r\a\b\f\x\3\b\u\4\m\r\l\o\9\h\4\i\s\2\1\1\g\m\9\q\z\n\x\o\w\q\s\9\r\v\b\j\1\3\1\d\v\e\2\i\2\u\5\9\u\j\n\0\g\w\o\5\q\f\r\z\a\5\u\i\d\3\m\s\y\m\9\e\o\l\z\l\z\x\d\9\c\o\q\q\j\d\r\m\h\8\x\u\a\s\n\v\7\8\k\b\6\y\9\q\l\i\q\y\q\h\1\i\t\d\w\h\q\6\0\3\2\s\7\8\l\y\j\9\w\t\f\k\r\a\2\2\e\4\w\z\7\2\v\4\l\a\4\1\p\q\c\6\7\u\x\m\5\j\a\m\h\z\z\5\h\k\o\j\c\h\7\m\z\0\i\d\q\9\j\a\1\y\m\f\y\8\m\4\8\i\2\l\a\e\r\r\r\v\a\h\9\a\g\f\b\0\i\c\b\z\g\i\p\4\f\w\q\n\g\f\1\3\6\4\0\5\h\9\w\m\q\8\o\5\i\u\n\q\7\a\6\n\i\m\9\e\r\l\n\l\w\r\z\w\7\1\9\h\8\0\m\e\3\u\6\q\4\q\4\q\p\1\g\i\b\c\g\o\9\k\h\1\2\z\y\p\5\4\q\8\6\x\p\b\u\4\l\d\4\b\r\e\2\6\e\3\y\n\i\9\v\l\0\t\i\l\5\k\2\w\9\o\3\2\j\g\m\c\c\5\6\v\p\a\1\4\6\j\b\j\9\4\n\z\l\w\c\3\e\o\w\n\u\f\x\c\x\o\z\3\f\l\q\6\r\6\w\b\r\a\q\p\c\j\4\h\l\5\l\1\g\w\n\c\8\8\p\3\p\v\n\i\v\b\a\q\g\u\z\a\w\1\a\w\o\7\8\x\3\4\l\e\i\p\o\1\m\n\s\a\y\2\g\7\n\b\3\k\2\e\s\6\d\z\w\s\q\z\y\p\m\r\x\y\m\v\v\p\s\k\c\j\g\i\e\y\e\k\k\9\w\m\h\c\9\m\m\c\o\r\9\g\2\l\u\9\z\d\p\o\s\o\8\4\b\y\2\p\0\p\4\5\8\l\b\e\w\z\h\i\g\b\u\8\x\o\m\1\w\0\a\y\l\6\t\c\x\e\8\a\i\d\4\u\l\t\n\o\r\0\f\a\z\d\9\o\9\0\w\l\v\0\a\v\3\r\5\n\1\p\n\x\z\q\v\4\y\8\4\d\k\s\t\f\q\q\9\c\b\s\g\y\7\l\t\w\6\0\h\5\x\x\7\m\w\w\e\g\v\6\l\g\5\3\u\3\9\e\g\y\2\c\6\9\i\z\1\1\4\9\x\l\h\x\p\j\i\l\f\a\n\s\0\k\f\e\4\5\c\g\n\s\u\e\y\c\c\j\p\l\k\b\z\e\w\j\o\b\t\w\q\5\l\i\q\5\5\b\b\x\9\o\m\w\2\9\3\r\5\6\k\q\r\o\n\d\6\y\y\t\0\6\r\u\3\k\5\i\z\n\7\p\k\f\5\e\s\z\q\j\m\p\y\o\2\0\x\m\1\g\1\i\d\w\k\l\b\9\2\k\k\9\9\x\7\1\e\0\h\8\l\0\4\4\0\d\2\f\z\0\s\q\h\l\g\v\a\4\j\d\9\d\4\i\6\y\n\w\t\z\d\p\s\e\c\6\1\5\c\r\x\j\y\1\b\6\v\j\f\w\t\t\v\i\p\4\r\y\j\a\b\1\m\o\p\9\f\o\6\8\j\x\2\3\3\4\f\t\2\s\b\r\f\w\o\o\k\y\3\o\x\j\w\m\d\5\a\h\q\9\a\8\m\t\6\o\o\z\k\t\6\3\e\k\w\z\t\n\l\o\m\d\i\e\5\i\c\c\w\h\e\f\1\v\j\p\l\t\1\0\1\m\4\n\f\z\b\c\9\v\e\1\r\k\z\q\a\t\c\z\o\8\v\5\i\x\l\z\a\f\r\v\n\v\4\4\2\o\9\3\l\j\8\2\s\3\7\o\j\y\3\y\5\q\n\v\h\a\y\t\f\7\v\l\w\v\f\u\r\u\w\2\f\f\h\h\6\w\i\1\s\1\l\i\9\n\8\e\j\r\u\0\3\o\2\1\2\w\d\k\o\5\t\c\6\t\m\7\0\k\u\0\r\2\q\4\4\6\0\i\3\2\z\9\l\k\j\o\b\9\7\3\q\m\o\u\5\w\i\3\v\q\0\8\w\n\t\x\9\6\h\1\h\i\j\2\x\s\h\e\n\b\0\c\v\c\5\o\q\y\s\1\x\2\y\k\4\a\b\6\9\m\i\4\k\1\t\m\j\w\z\q\p\g\t\g\w\1\s\f\h\r\w\5\h\b\g\1\3\7\t\i\k\4\9\w\p\w\l\2\2\5\c\c\9\5\b\h\0\j\1\b\y\a\2\h\m\g\4\q\i\z\d\h\4\i\x\f\6\o\u\m\a\u\k\s\t\0\z\a\p\w\q\3\w\2\8\d\8\1\4\f\d\8\u\9\4\0\v\m\9\s\r\u\x\j\u\n\j\t\2\m\w\t\k\i\1\n\i\a\r\8\e\3\d\o\1\7\o\4\f\s\w\3\t\6\3\g\o\s\8\h\4\h\1\u\m\n\s\b\w\0\9\b\p\0\w\d\d\n\9\a\r\h\x\h\e\v\x\b\v\f\o\v\e\i\o\d\z\5\f\9\c\3\d\c\7\3\d\t\7\h\n\a\p\5\2\9\t\h\j\m\l\3\d\t\1\w\t\r\2\3\r\9\x\h\q\n\5\u\9\m\y\5\k\0\t\3\h\7\o\f\p\a\y\9\g\k\7\s\l\o\7\6\v\j\s\q\d\m\s\7\l\g\n\j\t\i\y\d\a\0\u\z\x\g\t\5\x\c\d\m\k\z\1\3\0\e\3\n\3\d\f\h\q\o\d\u\6\c\2\c\b\l\a\r\q\v\7\l\t\g\y\d\n\c\a\d\h\1\j\q\v\0\i\r\u\r\h\f\6\v\m\f\d\b\g\u\8\o\2\3\p\o\d\8\c\u\2\0\h\s\s\f\n\s\t\9\k\g\m\1\6 ]] 00:06:56.954 ************************************ 00:06:56.954 END TEST dd_rw_offset 00:06:56.954 ************************************ 00:06:56.954 00:06:56.954 real 0m1.168s 00:06:56.954 user 0m0.817s 00:06:56.954 sys 0m0.236s 00:06:56.954 11:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.954 11:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:57.213 11:10:38 -- dd/basic_rw.sh@1 -- # cleanup 00:06:57.213 11:10:38 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:57.213 11:10:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.213 11:10:38 -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.213 11:10:38 -- dd/common.sh@12 -- # local size=0xffff 00:06:57.213 11:10:38 -- dd/common.sh@14 -- 
# local bs=1048576 00:06:57.213 11:10:38 -- dd/common.sh@15 -- # local count=1 00:06:57.213 11:10:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.213 11:10:38 -- dd/common.sh@18 -- # gen_conf 00:06:57.213 11:10:38 -- dd/common.sh@31 -- # xtrace_disable 00:06:57.213 11:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:57.213 [2024-10-13 11:10:38.645649] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:57.213 [2024-10-13 11:10:38.646188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57968 ] 00:06:57.213 { 00:06:57.213 "subsystems": [ 00:06:57.213 { 00:06:57.213 "subsystem": "bdev", 00:06:57.213 "config": [ 00:06:57.213 { 00:06:57.213 "params": { 00:06:57.213 "trtype": "pcie", 00:06:57.213 "traddr": "0000:00:06.0", 00:06:57.213 "name": "Nvme0" 00:06:57.213 }, 00:06:57.213 "method": "bdev_nvme_attach_controller" 00:06:57.213 }, 00:06:57.213 { 00:06:57.213 "method": "bdev_wait_for_examine" 00:06:57.213 } 00:06:57.213 ] 00:06:57.213 } 00:06:57.213 ] 00:06:57.213 } 00:06:57.213 [2024-10-13 11:10:38.782884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.472 [2024-10-13 11:10:38.831351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.472  [2024-10-13T11:10:39.333Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:57.731 00:06:57.731 11:10:39 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.731 00:06:57.731 real 0m15.261s 00:06:57.731 user 0m11.139s 00:06:57.731 sys 0m2.661s 00:06:57.731 11:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.731 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.731 ************************************ 00:06:57.731 END TEST spdk_dd_basic_rw 00:06:57.731 ************************************ 00:06:57.731 11:10:39 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:57.731 11:10:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:57.731 11:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.731 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.731 ************************************ 00:06:57.731 START TEST spdk_dd_posix 00:06:57.731 ************************************ 00:06:57.731 11:10:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:57.731 * Looking for test storage... 
00:06:57.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.732 11:10:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.732 11:10:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.732 11:10:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.732 11:10:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.732 11:10:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.732 11:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.732 11:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.732 11:10:39 -- paths/export.sh@5 -- # export PATH 00:06:57.732 11:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.732 11:10:39 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:57.732 11:10:39 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:57.732 11:10:39 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:57.732 11:10:39 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:57.732 11:10:39 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.732 11:10:39 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.732 11:10:39 -- dd/posix.sh@130 -- # tests 00:06:57.732 11:10:39 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:57.732 * First test run, liburing in use 00:06:57.732 11:10:39 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:06:57.732 11:10:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:57.732 11:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.732 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.732 ************************************ 00:06:57.732 START TEST dd_flag_append 00:06:57.732 ************************************ 00:06:57.732 11:10:39 -- common/autotest_common.sh@1104 -- # append 00:06:57.732 11:10:39 -- dd/posix.sh@16 -- # local dump0 00:06:57.732 11:10:39 -- dd/posix.sh@17 -- # local dump1 00:06:57.732 11:10:39 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:57.732 11:10:39 -- dd/common.sh@98 -- # xtrace_disable 00:06:57.732 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.732 11:10:39 -- dd/posix.sh@19 -- # dump0=dklj1fn5eqq2nllv92wsp9cuesmtapnr 00:06:57.732 11:10:39 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:57.732 11:10:39 -- dd/common.sh@98 -- # xtrace_disable 00:06:57.732 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.732 11:10:39 -- dd/posix.sh@20 -- # dump1=z9s3hmxr6hlw803ddxkh8bpn0hxxeg71 00:06:57.732 11:10:39 -- dd/posix.sh@22 -- # printf %s dklj1fn5eqq2nllv92wsp9cuesmtapnr 00:06:57.732 11:10:39 -- dd/posix.sh@23 -- # printf %s z9s3hmxr6hlw803ddxkh8bpn0hxxeg71 00:06:57.732 11:10:39 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:57.991 [2024-10-13 11:10:39.336569] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:57.991 [2024-10-13 11:10:39.336685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:06:57.991 [2024-10-13 11:10:39.477382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.991 [2024-10-13 11:10:39.545953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.250  [2024-10-13T11:10:39.852Z] Copying: 32/32 [B] (average 31 kBps) 00:06:58.250 00:06:58.250 11:10:39 -- dd/posix.sh@27 -- # [[ z9s3hmxr6hlw803ddxkh8bpn0hxxeg71dklj1fn5eqq2nllv92wsp9cuesmtapnr == \z\9\s\3\h\m\x\r\6\h\l\w\8\0\3\d\d\x\k\h\8\b\p\n\0\h\x\x\e\g\7\1\d\k\l\j\1\f\n\5\e\q\q\2\n\l\l\v\9\2\w\s\p\9\c\u\e\s\m\t\a\p\n\r ]] 00:06:58.250 00:06:58.250 real 0m0.556s 00:06:58.250 user 0m0.330s 00:06:58.250 sys 0m0.104s 00:06:58.250 11:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.250 ************************************ 00:06:58.250 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:58.250 END TEST dd_flag_append 00:06:58.250 ************************************ 00:06:58.509 11:10:39 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:58.509 11:10:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.509 11:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.509 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:58.509 ************************************ 00:06:58.509 START TEST dd_flag_directory 00:06:58.509 ************************************ 00:06:58.509 11:10:39 -- common/autotest_common.sh@1104 -- # directory 00:06:58.509 11:10:39 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
00:06:58.509 11:10:39 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.509 11:10:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.509 11:10:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.509 11:10:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.509 11:10:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.509 11:10:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.509 11:10:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.509 11:10:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.509 11:10:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.509 11:10:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.509 11:10:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.509 [2024-10-13 11:10:39.931002] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:58.509 [2024-10-13 11:10:39.931118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ] 00:06:58.510 [2024-10-13 11:10:40.064431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.769 [2024-10-13 11:10:40.135902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.769 [2024-10-13 11:10:40.191925] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:58.769 [2024-10-13 11:10:40.192009] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:58.769 [2024-10-13 11:10:40.192034] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.769 [2024-10-13 11:10:40.265776] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.028 11:10:40 -- common/autotest_common.sh@643 -- # es=236 00:06:59.028 11:10:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.028 11:10:40 -- common/autotest_common.sh@652 -- # es=108 00:06:59.028 11:10:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:59.028 11:10:40 -- common/autotest_common.sh@660 -- # es=1 00:06:59.028 11:10:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.028 11:10:40 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.028 11:10:40 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.028 11:10:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.028 11:10:40 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.029 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.029 11:10:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.029 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.029 11:10:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.029 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.029 11:10:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.029 11:10:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.029 11:10:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.029 [2024-10-13 11:10:40.416367] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:59.029 [2024-10-13 11:10:40.416876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58062 ] 00:06:59.029 [2024-10-13 11:10:40.544502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.029 [2024-10-13 11:10:40.593637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.288 [2024-10-13 11:10:40.638789] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.288 [2024-10-13 11:10:40.638867] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.288 [2024-10-13 11:10:40.638882] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.288 [2024-10-13 11:10:40.707984] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.288 11:10:40 -- common/autotest_common.sh@643 -- # es=236 00:06:59.288 11:10:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.288 11:10:40 -- common/autotest_common.sh@652 -- # es=108 00:06:59.288 11:10:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:59.288 11:10:40 -- common/autotest_common.sh@660 -- # es=1 00:06:59.288 11:10:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.288 00:06:59.288 real 0m0.919s 00:06:59.288 user 0m0.527s 00:06:59.288 sys 0m0.183s 00:06:59.288 11:10:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.288 ************************************ 00:06:59.288 END TEST dd_flag_directory 00:06:59.288 ************************************ 00:06:59.288 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:06:59.288 11:10:40 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:59.288 11:10:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:59.288 11:10:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.288 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:06:59.288 ************************************ 00:06:59.288 START TEST dd_flag_nofollow 00:06:59.288 ************************************ 00:06:59.288 11:10:40 -- common/autotest_common.sh@1104 -- # nofollow 00:06:59.288 11:10:40 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:59.288 11:10:40 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:59.288 11:10:40 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:59.288 11:10:40 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:59.288 11:10:40 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.288 11:10:40 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.288 11:10:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.288 11:10:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.288 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.288 11:10:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.288 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.288 11:10:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.288 11:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.288 11:10:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.288 11:10:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.288 11:10:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.548 [2024-10-13 11:10:40.909630] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:59.548 [2024-10-13 11:10:40.909716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58090 ] 00:06:59.548 [2024-10-13 11:10:41.039924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.548 [2024-10-13 11:10:41.088505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.548 [2024-10-13 11:10:41.132699] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:59.548 [2024-10-13 11:10:41.132786] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:59.548 [2024-10-13 11:10:41.132799] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.807 [2024-10-13 11:10:41.194489] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.807 11:10:41 -- common/autotest_common.sh@643 -- # es=216 00:06:59.807 11:10:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.807 11:10:41 -- common/autotest_common.sh@652 -- # es=88 00:06:59.807 11:10:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:59.807 11:10:41 -- common/autotest_common.sh@660 -- # es=1 00:06:59.807 11:10:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.807 11:10:41 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:59.807 11:10:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.807 11:10:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:59.807 11:10:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.807 11:10:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.807 11:10:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.807 11:10:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.807 11:10:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.807 11:10:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.807 11:10:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.807 11:10:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.807 11:10:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:59.807 [2024-10-13 11:10:41.349306] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:59.807 [2024-10-13 11:10:41.349418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58100 ] 00:07:00.066 [2024-10-13 11:10:41.482154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.066 [2024-10-13 11:10:41.531543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.066 [2024-10-13 11:10:41.576280] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:00.066 [2024-10-13 11:10:41.576390] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:00.066 [2024-10-13 11:10:41.576422] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.066 [2024-10-13 11:10:41.642067] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:00.325 11:10:41 -- common/autotest_common.sh@643 -- # es=216 00:07:00.325 11:10:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.325 11:10:41 -- common/autotest_common.sh@652 -- # es=88 00:07:00.325 11:10:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:00.325 11:10:41 -- common/autotest_common.sh@660 -- # es=1 00:07:00.325 11:10:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.325 11:10:41 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:00.325 11:10:41 -- dd/common.sh@98 -- # xtrace_disable 00:07:00.325 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:00.325 11:10:41 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.325 [2024-10-13 11:10:41.799657] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:00.325 [2024-10-13 11:10:41.799789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:07:00.325 [2024-10-13 11:10:41.923933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.586 [2024-10-13 11:10:41.978488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.586  [2024-10-13T11:10:42.447Z] Copying: 512/512 [B] (average 500 kBps) 00:07:00.845 00:07:00.845 11:10:42 -- dd/posix.sh@49 -- # [[ 6hvur9xh4j4k0c3d26onvxsmt9nk06a0de6qv3lwg6soe5vf36j8kdmf88ykzn72ie1h5bffkxzmlqqiqscd5vxmanxg4t0r4p6kg9009m8yxdg31yxnx28i8d23zbsu06hd6byb0vdn8ydor3qteuk73e6cgvos4xm0hmvalb3z4xhjladukj1ccz4w00jt1srf6ig41byhu0gbtnn6x3afw3h95x92m3sgmtv26a31bcoy4mbbrnr7vkf7t0cx6mkm8pnli89jh19x8qjvt70yluwefaiqzqjxrec7er61eiv23n0rk1050pjle0yicvbfpa7kup7v50b43gio9f6drgrcxlfe1d74nqtcdfhj3bcpl9ue59tnt2azwgc768mxnrjb9nbxd7phq4tcrrqu4zc72ilv8plcllib27yvszmebvyz2rkuydge3ksf84r6nv2r8ehkjcfae52qzdkno9yhbuva4dckw365a8s2d2oeak4nor16iquyd58r == \6\h\v\u\r\9\x\h\4\j\4\k\0\c\3\d\2\6\o\n\v\x\s\m\t\9\n\k\0\6\a\0\d\e\6\q\v\3\l\w\g\6\s\o\e\5\v\f\3\6\j\8\k\d\m\f\8\8\y\k\z\n\7\2\i\e\1\h\5\b\f\f\k\x\z\m\l\q\q\i\q\s\c\d\5\v\x\m\a\n\x\g\4\t\0\r\4\p\6\k\g\9\0\0\9\m\8\y\x\d\g\3\1\y\x\n\x\2\8\i\8\d\2\3\z\b\s\u\0\6\h\d\6\b\y\b\0\v\d\n\8\y\d\o\r\3\q\t\e\u\k\7\3\e\6\c\g\v\o\s\4\x\m\0\h\m\v\a\l\b\3\z\4\x\h\j\l\a\d\u\k\j\1\c\c\z\4\w\0\0\j\t\1\s\r\f\6\i\g\4\1\b\y\h\u\0\g\b\t\n\n\6\x\3\a\f\w\3\h\9\5\x\9\2\m\3\s\g\m\t\v\2\6\a\3\1\b\c\o\y\4\m\b\b\r\n\r\7\v\k\f\7\t\0\c\x\6\m\k\m\8\p\n\l\i\8\9\j\h\1\9\x\8\q\j\v\t\7\0\y\l\u\w\e\f\a\i\q\z\q\j\x\r\e\c\7\e\r\6\1\e\i\v\2\3\n\0\r\k\1\0\5\0\p\j\l\e\0\y\i\c\v\b\f\p\a\7\k\u\p\7\v\5\0\b\4\3\g\i\o\9\f\6\d\r\g\r\c\x\l\f\e\1\d\7\4\n\q\t\c\d\f\h\j\3\b\c\p\l\9\u\e\5\9\t\n\t\2\a\z\w\g\c\7\6\8\m\x\n\r\j\b\9\n\b\x\d\7\p\h\q\4\t\c\r\r\q\u\4\z\c\7\2\i\l\v\8\p\l\c\l\l\i\b\2\7\y\v\s\z\m\e\b\v\y\z\2\r\k\u\y\d\g\e\3\k\s\f\8\4\r\6\n\v\2\r\8\e\h\k\j\c\f\a\e\5\2\q\z\d\k\n\o\9\y\h\b\u\v\a\4\d\c\k\w\3\6\5\a\8\s\2\d\2\o\e\a\k\4\n\o\r\1\6\i\q\u\y\d\5\8\r ]] 00:07:00.845 00:07:00.845 real 0m1.363s 00:07:00.845 user 0m0.772s 00:07:00.845 sys 0m0.262s 00:07:00.845 11:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.845 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:00.845 ************************************ 00:07:00.845 END TEST dd_flag_nofollow 00:07:00.845 ************************************ 00:07:00.845 11:10:42 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:00.845 11:10:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.845 11:10:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.845 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:00.845 ************************************ 00:07:00.845 START TEST dd_flag_noatime 00:07:00.845 ************************************ 00:07:00.845 11:10:42 -- common/autotest_common.sh@1104 -- # noatime 00:07:00.845 11:10:42 -- dd/posix.sh@53 -- # local atime_if 00:07:00.845 11:10:42 -- dd/posix.sh@54 -- # local atime_of 00:07:00.845 11:10:42 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:00.845 11:10:42 -- dd/common.sh@98 -- # xtrace_disable 00:07:00.845 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:00.845 11:10:42 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.845 11:10:42 -- dd/posix.sh@60 -- # atime_if=1728817842 
00:07:00.845 11:10:42 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.845 11:10:42 -- dd/posix.sh@61 -- # atime_of=1728817842 00:07:00.845 11:10:42 -- dd/posix.sh@66 -- # sleep 1 00:07:01.781 11:10:43 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.781 [2024-10-13 11:10:43.348749] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:01.781 [2024-10-13 11:10:43.348866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58144 ] 00:07:02.040 [2024-10-13 11:10:43.489408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.040 [2024-10-13 11:10:43.559618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.040  [2024-10-13T11:10:43.901Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.299 00:07:02.299 11:10:43 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.299 11:10:43 -- dd/posix.sh@69 -- # (( atime_if == 1728817842 )) 00:07:02.299 11:10:43 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.299 11:10:43 -- dd/posix.sh@70 -- # (( atime_of == 1728817842 )) 00:07:02.299 11:10:43 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.299 [2024-10-13 11:10:43.859652] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:02.299 [2024-10-13 11:10:43.859777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58154 ] 00:07:02.558 [2024-10-13 11:10:43.999354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.558 [2024-10-13 11:10:44.048973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.558  [2024-10-13T11:10:44.419Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.817 00:07:02.817 11:10:44 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.817 11:10:44 -- dd/posix.sh@73 -- # (( atime_if < 1728817844 )) 00:07:02.817 00:07:02.817 real 0m2.009s 00:07:02.817 user 0m0.559s 00:07:02.817 sys 0m0.210s 00:07:02.817 11:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.817 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.817 ************************************ 00:07:02.817 END TEST dd_flag_noatime 00:07:02.817 ************************************ 00:07:02.817 11:10:44 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:02.817 11:10:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.817 11:10:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.817 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.817 ************************************ 00:07:02.817 START TEST dd_flags_misc 00:07:02.817 ************************************ 00:07:02.817 11:10:44 -- common/autotest_common.sh@1104 -- # io 00:07:02.817 11:10:44 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:02.817 11:10:44 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:02.817 11:10:44 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:02.817 11:10:44 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:02.817 11:10:44 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:02.817 11:10:44 -- dd/common.sh@98 -- # xtrace_disable 00:07:02.817 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.817 11:10:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.817 11:10:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:02.817 [2024-10-13 11:10:44.378436] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:02.817 [2024-10-13 11:10:44.378537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58186 ] 00:07:03.078 [2024-10-13 11:10:44.506834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.078 [2024-10-13 11:10:44.559247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.078  [2024-10-13T11:10:44.939Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.337 00:07:03.337 11:10:44 -- dd/posix.sh@93 -- # [[ tqgd2os674adia0dke1qnldsql95mwchnfostgpun04iafx2esxygcircaxclejbtge4a5kblalztjdsv9xc4puye7s4dzdeq5d4ntu9nlbbqv0kcd91lf4bwjvta81ztq11gbq4wuw7afr0wjo1wj03y0gci9zwtqu11q01z57h1qhapd6xdflrguj3rqml1tx6hkrl9v7cj6p7l68oqtly73m5u8efumxsig23d70m4orzwq5ejmi5wsfgfdow4j9gyyjzgg1ekmtlybrsh05ae2jvsh10obs6zces1k9npqydywiv935hayu52h84q8jhrx88zzyulqpzt6vvo0m0vr7jahleupdm1w5updfgevbgy4fo1qlto5q2xbiliontqgyyye0cgnbjkz0i1v1bdutp7p4lxer9ihxivnxwzbmvpm3z40pfpv84mvncysvcjv194fbv15106x5cyl5t4cms6b644zdat29idmaibv7fz7rlggnhkaoco8nt == \t\q\g\d\2\o\s\6\7\4\a\d\i\a\0\d\k\e\1\q\n\l\d\s\q\l\9\5\m\w\c\h\n\f\o\s\t\g\p\u\n\0\4\i\a\f\x\2\e\s\x\y\g\c\i\r\c\a\x\c\l\e\j\b\t\g\e\4\a\5\k\b\l\a\l\z\t\j\d\s\v\9\x\c\4\p\u\y\e\7\s\4\d\z\d\e\q\5\d\4\n\t\u\9\n\l\b\b\q\v\0\k\c\d\9\1\l\f\4\b\w\j\v\t\a\8\1\z\t\q\1\1\g\b\q\4\w\u\w\7\a\f\r\0\w\j\o\1\w\j\0\3\y\0\g\c\i\9\z\w\t\q\u\1\1\q\0\1\z\5\7\h\1\q\h\a\p\d\6\x\d\f\l\r\g\u\j\3\r\q\m\l\1\t\x\6\h\k\r\l\9\v\7\c\j\6\p\7\l\6\8\o\q\t\l\y\7\3\m\5\u\8\e\f\u\m\x\s\i\g\2\3\d\7\0\m\4\o\r\z\w\q\5\e\j\m\i\5\w\s\f\g\f\d\o\w\4\j\9\g\y\y\j\z\g\g\1\e\k\m\t\l\y\b\r\s\h\0\5\a\e\2\j\v\s\h\1\0\o\b\s\6\z\c\e\s\1\k\9\n\p\q\y\d\y\w\i\v\9\3\5\h\a\y\u\5\2\h\8\4\q\8\j\h\r\x\8\8\z\z\y\u\l\q\p\z\t\6\v\v\o\0\m\0\v\r\7\j\a\h\l\e\u\p\d\m\1\w\5\u\p\d\f\g\e\v\b\g\y\4\f\o\1\q\l\t\o\5\q\2\x\b\i\l\i\o\n\t\q\g\y\y\y\e\0\c\g\n\b\j\k\z\0\i\1\v\1\b\d\u\t\p\7\p\4\l\x\e\r\9\i\h\x\i\v\n\x\w\z\b\m\v\p\m\3\z\4\0\p\f\p\v\8\4\m\v\n\c\y\s\v\c\j\v\1\9\4\f\b\v\1\5\1\0\6\x\5\c\y\l\5\t\4\c\m\s\6\b\6\4\4\z\d\a\t\2\9\i\d\m\a\i\b\v\7\f\z\7\r\l\g\g\n\h\k\a\o\c\o\8\n\t ]] 00:07:03.337 11:10:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.337 11:10:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:03.337 [2024-10-13 11:10:44.830111] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:03.337 [2024-10-13 11:10:44.830202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:07:03.595 [2024-10-13 11:10:44.963493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.595 [2024-10-13 11:10:45.017147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.595  [2024-10-13T11:10:45.456Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.854 00:07:03.854 11:10:45 -- dd/posix.sh@93 -- # [[ tqgd2os674adia0dke1qnldsql95mwchnfostgpun04iafx2esxygcircaxclejbtge4a5kblalztjdsv9xc4puye7s4dzdeq5d4ntu9nlbbqv0kcd91lf4bwjvta81ztq11gbq4wuw7afr0wjo1wj03y0gci9zwtqu11q01z57h1qhapd6xdflrguj3rqml1tx6hkrl9v7cj6p7l68oqtly73m5u8efumxsig23d70m4orzwq5ejmi5wsfgfdow4j9gyyjzgg1ekmtlybrsh05ae2jvsh10obs6zces1k9npqydywiv935hayu52h84q8jhrx88zzyulqpzt6vvo0m0vr7jahleupdm1w5updfgevbgy4fo1qlto5q2xbiliontqgyyye0cgnbjkz0i1v1bdutp7p4lxer9ihxivnxwzbmvpm3z40pfpv84mvncysvcjv194fbv15106x5cyl5t4cms6b644zdat29idmaibv7fz7rlggnhkaoco8nt == \t\q\g\d\2\o\s\6\7\4\a\d\i\a\0\d\k\e\1\q\n\l\d\s\q\l\9\5\m\w\c\h\n\f\o\s\t\g\p\u\n\0\4\i\a\f\x\2\e\s\x\y\g\c\i\r\c\a\x\c\l\e\j\b\t\g\e\4\a\5\k\b\l\a\l\z\t\j\d\s\v\9\x\c\4\p\u\y\e\7\s\4\d\z\d\e\q\5\d\4\n\t\u\9\n\l\b\b\q\v\0\k\c\d\9\1\l\f\4\b\w\j\v\t\a\8\1\z\t\q\1\1\g\b\q\4\w\u\w\7\a\f\r\0\w\j\o\1\w\j\0\3\y\0\g\c\i\9\z\w\t\q\u\1\1\q\0\1\z\5\7\h\1\q\h\a\p\d\6\x\d\f\l\r\g\u\j\3\r\q\m\l\1\t\x\6\h\k\r\l\9\v\7\c\j\6\p\7\l\6\8\o\q\t\l\y\7\3\m\5\u\8\e\f\u\m\x\s\i\g\2\3\d\7\0\m\4\o\r\z\w\q\5\e\j\m\i\5\w\s\f\g\f\d\o\w\4\j\9\g\y\y\j\z\g\g\1\e\k\m\t\l\y\b\r\s\h\0\5\a\e\2\j\v\s\h\1\0\o\b\s\6\z\c\e\s\1\k\9\n\p\q\y\d\y\w\i\v\9\3\5\h\a\y\u\5\2\h\8\4\q\8\j\h\r\x\8\8\z\z\y\u\l\q\p\z\t\6\v\v\o\0\m\0\v\r\7\j\a\h\l\e\u\p\d\m\1\w\5\u\p\d\f\g\e\v\b\g\y\4\f\o\1\q\l\t\o\5\q\2\x\b\i\l\i\o\n\t\q\g\y\y\y\e\0\c\g\n\b\j\k\z\0\i\1\v\1\b\d\u\t\p\7\p\4\l\x\e\r\9\i\h\x\i\v\n\x\w\z\b\m\v\p\m\3\z\4\0\p\f\p\v\8\4\m\v\n\c\y\s\v\c\j\v\1\9\4\f\b\v\1\5\1\0\6\x\5\c\y\l\5\t\4\c\m\s\6\b\6\4\4\z\d\a\t\2\9\i\d\m\a\i\b\v\7\f\z\7\r\l\g\g\n\h\k\a\o\c\o\8\n\t ]] 00:07:03.854 11:10:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.854 11:10:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:03.854 [2024-10-13 11:10:45.306822] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:03.854 [2024-10-13 11:10:45.306973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58195 ] 00:07:03.854 [2024-10-13 11:10:45.444269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.113 [2024-10-13 11:10:45.502553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.113  [2024-10-13T11:10:45.974Z] Copying: 512/512 [B] (average 250 kBps) 00:07:04.372 00:07:04.372 11:10:45 -- dd/posix.sh@93 -- # [[ tqgd2os674adia0dke1qnldsql95mwchnfostgpun04iafx2esxygcircaxclejbtge4a5kblalztjdsv9xc4puye7s4dzdeq5d4ntu9nlbbqv0kcd91lf4bwjvta81ztq11gbq4wuw7afr0wjo1wj03y0gci9zwtqu11q01z57h1qhapd6xdflrguj3rqml1tx6hkrl9v7cj6p7l68oqtly73m5u8efumxsig23d70m4orzwq5ejmi5wsfgfdow4j9gyyjzgg1ekmtlybrsh05ae2jvsh10obs6zces1k9npqydywiv935hayu52h84q8jhrx88zzyulqpzt6vvo0m0vr7jahleupdm1w5updfgevbgy4fo1qlto5q2xbiliontqgyyye0cgnbjkz0i1v1bdutp7p4lxer9ihxivnxwzbmvpm3z40pfpv84mvncysvcjv194fbv15106x5cyl5t4cms6b644zdat29idmaibv7fz7rlggnhkaoco8nt == \t\q\g\d\2\o\s\6\7\4\a\d\i\a\0\d\k\e\1\q\n\l\d\s\q\l\9\5\m\w\c\h\n\f\o\s\t\g\p\u\n\0\4\i\a\f\x\2\e\s\x\y\g\c\i\r\c\a\x\c\l\e\j\b\t\g\e\4\a\5\k\b\l\a\l\z\t\j\d\s\v\9\x\c\4\p\u\y\e\7\s\4\d\z\d\e\q\5\d\4\n\t\u\9\n\l\b\b\q\v\0\k\c\d\9\1\l\f\4\b\w\j\v\t\a\8\1\z\t\q\1\1\g\b\q\4\w\u\w\7\a\f\r\0\w\j\o\1\w\j\0\3\y\0\g\c\i\9\z\w\t\q\u\1\1\q\0\1\z\5\7\h\1\q\h\a\p\d\6\x\d\f\l\r\g\u\j\3\r\q\m\l\1\t\x\6\h\k\r\l\9\v\7\c\j\6\p\7\l\6\8\o\q\t\l\y\7\3\m\5\u\8\e\f\u\m\x\s\i\g\2\3\d\7\0\m\4\o\r\z\w\q\5\e\j\m\i\5\w\s\f\g\f\d\o\w\4\j\9\g\y\y\j\z\g\g\1\e\k\m\t\l\y\b\r\s\h\0\5\a\e\2\j\v\s\h\1\0\o\b\s\6\z\c\e\s\1\k\9\n\p\q\y\d\y\w\i\v\9\3\5\h\a\y\u\5\2\h\8\4\q\8\j\h\r\x\8\8\z\z\y\u\l\q\p\z\t\6\v\v\o\0\m\0\v\r\7\j\a\h\l\e\u\p\d\m\1\w\5\u\p\d\f\g\e\v\b\g\y\4\f\o\1\q\l\t\o\5\q\2\x\b\i\l\i\o\n\t\q\g\y\y\y\e\0\c\g\n\b\j\k\z\0\i\1\v\1\b\d\u\t\p\7\p\4\l\x\e\r\9\i\h\x\i\v\n\x\w\z\b\m\v\p\m\3\z\4\0\p\f\p\v\8\4\m\v\n\c\y\s\v\c\j\v\1\9\4\f\b\v\1\5\1\0\6\x\5\c\y\l\5\t\4\c\m\s\6\b\6\4\4\z\d\a\t\2\9\i\d\m\a\i\b\v\7\f\z\7\r\l\g\g\n\h\k\a\o\c\o\8\n\t ]] 00:07:04.372 11:10:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.372 11:10:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:04.372 [2024-10-13 11:10:45.817474] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:04.372 [2024-10-13 11:10:45.817576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:07:04.372 [2024-10-13 11:10:45.956162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.632 [2024-10-13 11:10:46.029352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.632  [2024-10-13T11:10:46.493Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.891 00:07:04.891 11:10:46 -- dd/posix.sh@93 -- # [[ tqgd2os674adia0dke1qnldsql95mwchnfostgpun04iafx2esxygcircaxclejbtge4a5kblalztjdsv9xc4puye7s4dzdeq5d4ntu9nlbbqv0kcd91lf4bwjvta81ztq11gbq4wuw7afr0wjo1wj03y0gci9zwtqu11q01z57h1qhapd6xdflrguj3rqml1tx6hkrl9v7cj6p7l68oqtly73m5u8efumxsig23d70m4orzwq5ejmi5wsfgfdow4j9gyyjzgg1ekmtlybrsh05ae2jvsh10obs6zces1k9npqydywiv935hayu52h84q8jhrx88zzyulqpzt6vvo0m0vr7jahleupdm1w5updfgevbgy4fo1qlto5q2xbiliontqgyyye0cgnbjkz0i1v1bdutp7p4lxer9ihxivnxwzbmvpm3z40pfpv84mvncysvcjv194fbv15106x5cyl5t4cms6b644zdat29idmaibv7fz7rlggnhkaoco8nt == \t\q\g\d\2\o\s\6\7\4\a\d\i\a\0\d\k\e\1\q\n\l\d\s\q\l\9\5\m\w\c\h\n\f\o\s\t\g\p\u\n\0\4\i\a\f\x\2\e\s\x\y\g\c\i\r\c\a\x\c\l\e\j\b\t\g\e\4\a\5\k\b\l\a\l\z\t\j\d\s\v\9\x\c\4\p\u\y\e\7\s\4\d\z\d\e\q\5\d\4\n\t\u\9\n\l\b\b\q\v\0\k\c\d\9\1\l\f\4\b\w\j\v\t\a\8\1\z\t\q\1\1\g\b\q\4\w\u\w\7\a\f\r\0\w\j\o\1\w\j\0\3\y\0\g\c\i\9\z\w\t\q\u\1\1\q\0\1\z\5\7\h\1\q\h\a\p\d\6\x\d\f\l\r\g\u\j\3\r\q\m\l\1\t\x\6\h\k\r\l\9\v\7\c\j\6\p\7\l\6\8\o\q\t\l\y\7\3\m\5\u\8\e\f\u\m\x\s\i\g\2\3\d\7\0\m\4\o\r\z\w\q\5\e\j\m\i\5\w\s\f\g\f\d\o\w\4\j\9\g\y\y\j\z\g\g\1\e\k\m\t\l\y\b\r\s\h\0\5\a\e\2\j\v\s\h\1\0\o\b\s\6\z\c\e\s\1\k\9\n\p\q\y\d\y\w\i\v\9\3\5\h\a\y\u\5\2\h\8\4\q\8\j\h\r\x\8\8\z\z\y\u\l\q\p\z\t\6\v\v\o\0\m\0\v\r\7\j\a\h\l\e\u\p\d\m\1\w\5\u\p\d\f\g\e\v\b\g\y\4\f\o\1\q\l\t\o\5\q\2\x\b\i\l\i\o\n\t\q\g\y\y\y\e\0\c\g\n\b\j\k\z\0\i\1\v\1\b\d\u\t\p\7\p\4\l\x\e\r\9\i\h\x\i\v\n\x\w\z\b\m\v\p\m\3\z\4\0\p\f\p\v\8\4\m\v\n\c\y\s\v\c\j\v\1\9\4\f\b\v\1\5\1\0\6\x\5\c\y\l\5\t\4\c\m\s\6\b\6\4\4\z\d\a\t\2\9\i\d\m\a\i\b\v\7\f\z\7\r\l\g\g\n\h\k\a\o\c\o\8\n\t ]] 00:07:04.891 11:10:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:04.891 11:10:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:04.891 11:10:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:04.891 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:04.891 11:10:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.891 11:10:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:04.891 [2024-10-13 11:10:46.339383] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:04.891 [2024-10-13 11:10:46.339497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58210 ] 00:07:04.891 [2024-10-13 11:10:46.469807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.150 [2024-10-13 11:10:46.527816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.150  [2024-10-13T11:10:46.752Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.150 00:07:05.150 11:10:46 -- dd/posix.sh@93 -- # [[ 6lldrng15lqu1ltytlr46bmuolku3757oq5jd24bigdzdoj073eufsaacuythbwlmggrqoo35arhj024hyutf6l0cuuaf50alb6cer3eozsmk4pn07u55mqozneo8xx24f6tkwxsoql2f0n3zn870je27uvqbtoedx3yrth4ypf31lz2glyso9rrb2engff7qfnxf8jtswx2kc9y3rhx3qbbtvg946kbzvrxnagg231zl1jfqv8exd2c0ldoyml6toufhbx5a1tllvoy7pswfazo5zu0omgztbg9uuupg4zujzho172hmqm18t3uvthcp0ilqk8w0hg7qmva7uvd7pjduwv4z27gba9wdecou3bg01cafckwlotpzyuj1gv2swy0jls5d487a5k3ms1mqqq9sipes3ezby4s1z5elfpwe1odfn2omr21jzucg5wdmcznnk35w5e6ecvuprzby6ojqvm7zboanqg3g89cjp3f34aa67nh100uxjf4iflt == \6\l\l\d\r\n\g\1\5\l\q\u\1\l\t\y\t\l\r\4\6\b\m\u\o\l\k\u\3\7\5\7\o\q\5\j\d\2\4\b\i\g\d\z\d\o\j\0\7\3\e\u\f\s\a\a\c\u\y\t\h\b\w\l\m\g\g\r\q\o\o\3\5\a\r\h\j\0\2\4\h\y\u\t\f\6\l\0\c\u\u\a\f\5\0\a\l\b\6\c\e\r\3\e\o\z\s\m\k\4\p\n\0\7\u\5\5\m\q\o\z\n\e\o\8\x\x\2\4\f\6\t\k\w\x\s\o\q\l\2\f\0\n\3\z\n\8\7\0\j\e\2\7\u\v\q\b\t\o\e\d\x\3\y\r\t\h\4\y\p\f\3\1\l\z\2\g\l\y\s\o\9\r\r\b\2\e\n\g\f\f\7\q\f\n\x\f\8\j\t\s\w\x\2\k\c\9\y\3\r\h\x\3\q\b\b\t\v\g\9\4\6\k\b\z\v\r\x\n\a\g\g\2\3\1\z\l\1\j\f\q\v\8\e\x\d\2\c\0\l\d\o\y\m\l\6\t\o\u\f\h\b\x\5\a\1\t\l\l\v\o\y\7\p\s\w\f\a\z\o\5\z\u\0\o\m\g\z\t\b\g\9\u\u\u\p\g\4\z\u\j\z\h\o\1\7\2\h\m\q\m\1\8\t\3\u\v\t\h\c\p\0\i\l\q\k\8\w\0\h\g\7\q\m\v\a\7\u\v\d\7\p\j\d\u\w\v\4\z\2\7\g\b\a\9\w\d\e\c\o\u\3\b\g\0\1\c\a\f\c\k\w\l\o\t\p\z\y\u\j\1\g\v\2\s\w\y\0\j\l\s\5\d\4\8\7\a\5\k\3\m\s\1\m\q\q\q\9\s\i\p\e\s\3\e\z\b\y\4\s\1\z\5\e\l\f\p\w\e\1\o\d\f\n\2\o\m\r\2\1\j\z\u\c\g\5\w\d\m\c\z\n\n\k\3\5\w\5\e\6\e\c\v\u\p\r\z\b\y\6\o\j\q\v\m\7\z\b\o\a\n\q\g\3\g\8\9\c\j\p\3\f\3\4\a\a\6\7\n\h\1\0\0\u\x\j\f\4\i\f\l\t ]] 00:07:05.150 11:10:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.150 11:10:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.410 [2024-10-13 11:10:46.778591] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:05.410 [2024-10-13 11:10:46.778714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:07:05.410 [2024-10-13 11:10:46.902016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.410 [2024-10-13 11:10:46.953768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.410  [2024-10-13T11:10:47.271Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.669 00:07:05.669 11:10:47 -- dd/posix.sh@93 -- # [[ 6lldrng15lqu1ltytlr46bmuolku3757oq5jd24bigdzdoj073eufsaacuythbwlmggrqoo35arhj024hyutf6l0cuuaf50alb6cer3eozsmk4pn07u55mqozneo8xx24f6tkwxsoql2f0n3zn870je27uvqbtoedx3yrth4ypf31lz2glyso9rrb2engff7qfnxf8jtswx2kc9y3rhx3qbbtvg946kbzvrxnagg231zl1jfqv8exd2c0ldoyml6toufhbx5a1tllvoy7pswfazo5zu0omgztbg9uuupg4zujzho172hmqm18t3uvthcp0ilqk8w0hg7qmva7uvd7pjduwv4z27gba9wdecou3bg01cafckwlotpzyuj1gv2swy0jls5d487a5k3ms1mqqq9sipes3ezby4s1z5elfpwe1odfn2omr21jzucg5wdmcznnk35w5e6ecvuprzby6ojqvm7zboanqg3g89cjp3f34aa67nh100uxjf4iflt == \6\l\l\d\r\n\g\1\5\l\q\u\1\l\t\y\t\l\r\4\6\b\m\u\o\l\k\u\3\7\5\7\o\q\5\j\d\2\4\b\i\g\d\z\d\o\j\0\7\3\e\u\f\s\a\a\c\u\y\t\h\b\w\l\m\g\g\r\q\o\o\3\5\a\r\h\j\0\2\4\h\y\u\t\f\6\l\0\c\u\u\a\f\5\0\a\l\b\6\c\e\r\3\e\o\z\s\m\k\4\p\n\0\7\u\5\5\m\q\o\z\n\e\o\8\x\x\2\4\f\6\t\k\w\x\s\o\q\l\2\f\0\n\3\z\n\8\7\0\j\e\2\7\u\v\q\b\t\o\e\d\x\3\y\r\t\h\4\y\p\f\3\1\l\z\2\g\l\y\s\o\9\r\r\b\2\e\n\g\f\f\7\q\f\n\x\f\8\j\t\s\w\x\2\k\c\9\y\3\r\h\x\3\q\b\b\t\v\g\9\4\6\k\b\z\v\r\x\n\a\g\g\2\3\1\z\l\1\j\f\q\v\8\e\x\d\2\c\0\l\d\o\y\m\l\6\t\o\u\f\h\b\x\5\a\1\t\l\l\v\o\y\7\p\s\w\f\a\z\o\5\z\u\0\o\m\g\z\t\b\g\9\u\u\u\p\g\4\z\u\j\z\h\o\1\7\2\h\m\q\m\1\8\t\3\u\v\t\h\c\p\0\i\l\q\k\8\w\0\h\g\7\q\m\v\a\7\u\v\d\7\p\j\d\u\w\v\4\z\2\7\g\b\a\9\w\d\e\c\o\u\3\b\g\0\1\c\a\f\c\k\w\l\o\t\p\z\y\u\j\1\g\v\2\s\w\y\0\j\l\s\5\d\4\8\7\a\5\k\3\m\s\1\m\q\q\q\9\s\i\p\e\s\3\e\z\b\y\4\s\1\z\5\e\l\f\p\w\e\1\o\d\f\n\2\o\m\r\2\1\j\z\u\c\g\5\w\d\m\c\z\n\n\k\3\5\w\5\e\6\e\c\v\u\p\r\z\b\y\6\o\j\q\v\m\7\z\b\o\a\n\q\g\3\g\8\9\c\j\p\3\f\3\4\a\a\6\7\n\h\1\0\0\u\x\j\f\4\i\f\l\t ]] 00:07:05.669 11:10:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.669 11:10:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:05.669 [2024-10-13 11:10:47.230798] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:05.669 [2024-10-13 11:10:47.230890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58225 ] 00:07:05.927 [2024-10-13 11:10:47.358123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.927 [2024-10-13 11:10:47.414224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.927  [2024-10-13T11:10:47.788Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.186 00:07:06.186 11:10:47 -- dd/posix.sh@93 -- # [[ 6lldrng15lqu1ltytlr46bmuolku3757oq5jd24bigdzdoj073eufsaacuythbwlmggrqoo35arhj024hyutf6l0cuuaf50alb6cer3eozsmk4pn07u55mqozneo8xx24f6tkwxsoql2f0n3zn870je27uvqbtoedx3yrth4ypf31lz2glyso9rrb2engff7qfnxf8jtswx2kc9y3rhx3qbbtvg946kbzvrxnagg231zl1jfqv8exd2c0ldoyml6toufhbx5a1tllvoy7pswfazo5zu0omgztbg9uuupg4zujzho172hmqm18t3uvthcp0ilqk8w0hg7qmva7uvd7pjduwv4z27gba9wdecou3bg01cafckwlotpzyuj1gv2swy0jls5d487a5k3ms1mqqq9sipes3ezby4s1z5elfpwe1odfn2omr21jzucg5wdmcznnk35w5e6ecvuprzby6ojqvm7zboanqg3g89cjp3f34aa67nh100uxjf4iflt == \6\l\l\d\r\n\g\1\5\l\q\u\1\l\t\y\t\l\r\4\6\b\m\u\o\l\k\u\3\7\5\7\o\q\5\j\d\2\4\b\i\g\d\z\d\o\j\0\7\3\e\u\f\s\a\a\c\u\y\t\h\b\w\l\m\g\g\r\q\o\o\3\5\a\r\h\j\0\2\4\h\y\u\t\f\6\l\0\c\u\u\a\f\5\0\a\l\b\6\c\e\r\3\e\o\z\s\m\k\4\p\n\0\7\u\5\5\m\q\o\z\n\e\o\8\x\x\2\4\f\6\t\k\w\x\s\o\q\l\2\f\0\n\3\z\n\8\7\0\j\e\2\7\u\v\q\b\t\o\e\d\x\3\y\r\t\h\4\y\p\f\3\1\l\z\2\g\l\y\s\o\9\r\r\b\2\e\n\g\f\f\7\q\f\n\x\f\8\j\t\s\w\x\2\k\c\9\y\3\r\h\x\3\q\b\b\t\v\g\9\4\6\k\b\z\v\r\x\n\a\g\g\2\3\1\z\l\1\j\f\q\v\8\e\x\d\2\c\0\l\d\o\y\m\l\6\t\o\u\f\h\b\x\5\a\1\t\l\l\v\o\y\7\p\s\w\f\a\z\o\5\z\u\0\o\m\g\z\t\b\g\9\u\u\u\p\g\4\z\u\j\z\h\o\1\7\2\h\m\q\m\1\8\t\3\u\v\t\h\c\p\0\i\l\q\k\8\w\0\h\g\7\q\m\v\a\7\u\v\d\7\p\j\d\u\w\v\4\z\2\7\g\b\a\9\w\d\e\c\o\u\3\b\g\0\1\c\a\f\c\k\w\l\o\t\p\z\y\u\j\1\g\v\2\s\w\y\0\j\l\s\5\d\4\8\7\a\5\k\3\m\s\1\m\q\q\q\9\s\i\p\e\s\3\e\z\b\y\4\s\1\z\5\e\l\f\p\w\e\1\o\d\f\n\2\o\m\r\2\1\j\z\u\c\g\5\w\d\m\c\z\n\n\k\3\5\w\5\e\6\e\c\v\u\p\r\z\b\y\6\o\j\q\v\m\7\z\b\o\a\n\q\g\3\g\8\9\c\j\p\3\f\3\4\a\a\6\7\n\h\1\0\0\u\x\j\f\4\i\f\l\t ]] 00:07:06.186 11:10:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.186 11:10:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.186 [2024-10-13 11:10:47.680802] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:06.186 [2024-10-13 11:10:47.680883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:07:06.445 [2024-10-13 11:10:47.811579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.445 [2024-10-13 11:10:47.858809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.445  [2024-10-13T11:10:48.305Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.703 00:07:06.703 11:10:48 -- dd/posix.sh@93 -- # [[ 6lldrng15lqu1ltytlr46bmuolku3757oq5jd24bigdzdoj073eufsaacuythbwlmggrqoo35arhj024hyutf6l0cuuaf50alb6cer3eozsmk4pn07u55mqozneo8xx24f6tkwxsoql2f0n3zn870je27uvqbtoedx3yrth4ypf31lz2glyso9rrb2engff7qfnxf8jtswx2kc9y3rhx3qbbtvg946kbzvrxnagg231zl1jfqv8exd2c0ldoyml6toufhbx5a1tllvoy7pswfazo5zu0omgztbg9uuupg4zujzho172hmqm18t3uvthcp0ilqk8w0hg7qmva7uvd7pjduwv4z27gba9wdecou3bg01cafckwlotpzyuj1gv2swy0jls5d487a5k3ms1mqqq9sipes3ezby4s1z5elfpwe1odfn2omr21jzucg5wdmcznnk35w5e6ecvuprzby6ojqvm7zboanqg3g89cjp3f34aa67nh100uxjf4iflt == \6\l\l\d\r\n\g\1\5\l\q\u\1\l\t\y\t\l\r\4\6\b\m\u\o\l\k\u\3\7\5\7\o\q\5\j\d\2\4\b\i\g\d\z\d\o\j\0\7\3\e\u\f\s\a\a\c\u\y\t\h\b\w\l\m\g\g\r\q\o\o\3\5\a\r\h\j\0\2\4\h\y\u\t\f\6\l\0\c\u\u\a\f\5\0\a\l\b\6\c\e\r\3\e\o\z\s\m\k\4\p\n\0\7\u\5\5\m\q\o\z\n\e\o\8\x\x\2\4\f\6\t\k\w\x\s\o\q\l\2\f\0\n\3\z\n\8\7\0\j\e\2\7\u\v\q\b\t\o\e\d\x\3\y\r\t\h\4\y\p\f\3\1\l\z\2\g\l\y\s\o\9\r\r\b\2\e\n\g\f\f\7\q\f\n\x\f\8\j\t\s\w\x\2\k\c\9\y\3\r\h\x\3\q\b\b\t\v\g\9\4\6\k\b\z\v\r\x\n\a\g\g\2\3\1\z\l\1\j\f\q\v\8\e\x\d\2\c\0\l\d\o\y\m\l\6\t\o\u\f\h\b\x\5\a\1\t\l\l\v\o\y\7\p\s\w\f\a\z\o\5\z\u\0\o\m\g\z\t\b\g\9\u\u\u\p\g\4\z\u\j\z\h\o\1\7\2\h\m\q\m\1\8\t\3\u\v\t\h\c\p\0\i\l\q\k\8\w\0\h\g\7\q\m\v\a\7\u\v\d\7\p\j\d\u\w\v\4\z\2\7\g\b\a\9\w\d\e\c\o\u\3\b\g\0\1\c\a\f\c\k\w\l\o\t\p\z\y\u\j\1\g\v\2\s\w\y\0\j\l\s\5\d\4\8\7\a\5\k\3\m\s\1\m\q\q\q\9\s\i\p\e\s\3\e\z\b\y\4\s\1\z\5\e\l\f\p\w\e\1\o\d\f\n\2\o\m\r\2\1\j\z\u\c\g\5\w\d\m\c\z\n\n\k\3\5\w\5\e\6\e\c\v\u\p\r\z\b\y\6\o\j\q\v\m\7\z\b\o\a\n\q\g\3\g\8\9\c\j\p\3\f\3\4\a\a\6\7\n\h\1\0\0\u\x\j\f\4\i\f\l\t ]] 00:07:06.703 00:07:06.703 real 0m3.734s 00:07:06.703 user 0m2.061s 00:07:06.703 sys 0m0.691s 00:07:06.703 11:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.703 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 ************************************ 00:07:06.703 END TEST dd_flags_misc 00:07:06.703 ************************************ 00:07:06.703 11:10:48 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:06.703 11:10:48 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:06.703 * Second test run, disabling liburing, forcing AIO 00:07:06.703 11:10:48 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:06.703 11:10:48 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:06.703 11:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.703 11:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.703 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 ************************************ 00:07:06.703 START TEST dd_flag_append_forced_aio 00:07:06.703 ************************************ 00:07:06.703 11:10:48 -- common/autotest_common.sh@1104 -- # append 00:07:06.703 11:10:48 -- dd/posix.sh@16 -- # local dump0 00:07:06.703 11:10:48 -- dd/posix.sh@17 -- # local dump1 00:07:06.703 11:10:48 -- dd/posix.sh@19 -- # gen_bytes 32 
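The append check being set up here is also compact: two short random strings are written to separate files, the first file is copied onto the second with --oflag=append, and the second file must end up as the concatenation of the two. A rough sketch of that flow, assuming the --aio spdk_dd invocation shown in the trace (variable and file names are illustrative, not the suite's actual helpers):
printf %s "$dump0" > dd.dump0                        # first 32-byte random string
printf %s "$dump1" > dd.dump1                        # second 32-byte random string
spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]             # append: dump1 keeps its data, dump0 is appended after it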
00:07:06.704 11:10:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:06.704 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.704 11:10:48 -- dd/posix.sh@19 -- # dump0=jus6erbywa7t1wnjbdujeu5d9knsnr0g 00:07:06.704 11:10:48 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:06.704 11:10:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:06.704 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.704 11:10:48 -- dd/posix.sh@20 -- # dump1=j2tbbz0ztjphd7lh4boeppg2jcrma1hz 00:07:06.704 11:10:48 -- dd/posix.sh@22 -- # printf %s jus6erbywa7t1wnjbdujeu5d9knsnr0g 00:07:06.704 11:10:48 -- dd/posix.sh@23 -- # printf %s j2tbbz0ztjphd7lh4boeppg2jcrma1hz 00:07:06.704 11:10:48 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:06.704 [2024-10-13 11:10:48.179811] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.704 [2024-10-13 11:10:48.179900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58259 ] 00:07:06.962 [2024-10-13 11:10:48.313516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.962 [2024-10-13 11:10:48.366049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.962  [2024-10-13T11:10:48.823Z] Copying: 32/32 [B] (average 31 kBps) 00:07:07.221 00:07:07.221 11:10:48 -- dd/posix.sh@27 -- # [[ j2tbbz0ztjphd7lh4boeppg2jcrma1hzjus6erbywa7t1wnjbdujeu5d9knsnr0g == \j\2\t\b\b\z\0\z\t\j\p\h\d\7\l\h\4\b\o\e\p\p\g\2\j\c\r\m\a\1\h\z\j\u\s\6\e\r\b\y\w\a\7\t\1\w\n\j\b\d\u\j\e\u\5\d\9\k\n\s\n\r\0\g ]] 00:07:07.221 00:07:07.221 real 0m0.460s 00:07:07.221 user 0m0.240s 00:07:07.221 sys 0m0.100s 00:07:07.221 11:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.221 ************************************ 00:07:07.221 END TEST dd_flag_append_forced_aio 00:07:07.221 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.221 ************************************ 00:07:07.221 11:10:48 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:07.221 11:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.221 11:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.221 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.221 ************************************ 00:07:07.221 START TEST dd_flag_directory_forced_aio 00:07:07.221 ************************************ 00:07:07.221 11:10:48 -- common/autotest_common.sh@1104 -- # directory 00:07:07.221 11:10:48 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.221 11:10:48 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.221 11:10:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.221 11:10:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.221 11:10:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.222 11:10:48 -- common/autotest_common.sh@632 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.222 11:10:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.222 11:10:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.222 11:10:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.222 11:10:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.222 11:10:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.222 11:10:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.222 [2024-10-13 11:10:48.687647] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:07.222 [2024-10-13 11:10:48.687745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:07:07.480 [2024-10-13 11:10:48.825253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.480 [2024-10-13 11:10:48.897820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.480 [2024-10-13 11:10:48.955504] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.480 [2024-10-13 11:10:48.955577] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.480 [2024-10-13 11:10:48.955606] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.480 [2024-10-13 11:10:49.022620] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.739 11:10:49 -- common/autotest_common.sh@643 -- # es=236 00:07:07.739 11:10:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.739 11:10:49 -- common/autotest_common.sh@652 -- # es=108 00:07:07.739 11:10:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:07.739 11:10:49 -- common/autotest_common.sh@660 -- # es=1 00:07:07.739 11:10:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.739 11:10:49 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.739 11:10:49 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.739 11:10:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.739 11:10:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.739 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.739 11:10:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.739 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.739 11:10:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.739 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.739 11:10:49 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.739 11:10:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.739 11:10:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.739 [2024-10-13 11:10:49.189484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:07.739 [2024-10-13 11:10:49.189582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:07:07.739 [2024-10-13 11:10:49.327688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.999 [2024-10-13 11:10:49.382548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.999 [2024-10-13 11:10:49.428852] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.999 [2024-10-13 11:10:49.428897] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.999 [2024-10-13 11:10:49.428910] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.999 [2024-10-13 11:10:49.490010] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.999 11:10:49 -- common/autotest_common.sh@643 -- # es=236 00:07:07.999 11:10:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.999 11:10:49 -- common/autotest_common.sh@652 -- # es=108 00:07:07.999 11:10:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:07.999 11:10:49 -- common/autotest_common.sh@660 -- # es=1 00:07:07.999 11:10:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.999 00:07:07.999 real 0m0.958s 00:07:07.999 user 0m0.543s 00:07:07.999 sys 0m0.206s 00:07:07.999 11:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.999 11:10:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.999 ************************************ 00:07:07.999 END TEST dd_flag_directory_forced_aio 00:07:07.999 ************************************ 00:07:08.258 11:10:49 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:08.258 11:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.258 11:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.258 11:10:49 -- common/autotest_common.sh@10 -- # set +x 00:07:08.258 ************************************ 00:07:08.258 START TEST dd_flag_nofollow_forced_aio 00:07:08.258 ************************************ 00:07:08.258 11:10:49 -- common/autotest_common.sh@1104 -- # nofollow 00:07:08.258 11:10:49 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:08.258 11:10:49 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:08.258 11:10:49 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:08.258 11:10:49 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:08.258 11:10:49 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.258 11:10:49 -- common/autotest_common.sh@640 -- # local es=0 00:07:08.258 11:10:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.258 11:10:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.258 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.258 11:10:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.258 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.258 11:10:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.258 11:10:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.258 11:10:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.258 11:10:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.258 11:10:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.258 [2024-10-13 11:10:49.702739] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:08.258 [2024-10-13 11:10:49.702991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:07:08.258 [2024-10-13 11:10:49.843548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.517 [2024-10-13 11:10:49.908059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.517 [2024-10-13 11:10:49.959575] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.517 [2024-10-13 11:10:49.959629] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.517 [2024-10-13 11:10:49.959644] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.517 [2024-10-13 11:10:50.025683] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:08.776 11:10:50 -- common/autotest_common.sh@643 -- # es=216 00:07:08.776 11:10:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.776 11:10:50 -- common/autotest_common.sh@652 -- # es=88 00:07:08.776 11:10:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:08.776 11:10:50 -- common/autotest_common.sh@660 -- # es=1 00:07:08.776 11:10:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.776 11:10:50 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.776 11:10:50 -- common/autotest_common.sh@640 -- # local es=0 00:07:08.776 11:10:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.776 11:10:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.776 11:10:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.776 11:10:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.776 11:10:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.776 11:10:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.776 11:10:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.776 11:10:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.776 11:10:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.776 11:10:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.776 [2024-10-13 11:10:50.185364] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:08.776 [2024-10-13 11:10:50.185452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:07:08.776 [2024-10-13 11:10:50.324386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.036 [2024-10-13 11:10:50.384235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.036 [2024-10-13 11:10:50.436548] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:09.036 [2024-10-13 11:10:50.436619] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:09.036 [2024-10-13 11:10:50.436635] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.036 [2024-10-13 11:10:50.503980] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:09.036 11:10:50 -- common/autotest_common.sh@643 -- # es=216 00:07:09.036 11:10:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:09.036 11:10:50 -- common/autotest_common.sh@652 -- # es=88 00:07:09.036 11:10:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:09.036 11:10:50 -- common/autotest_common.sh@660 -- # es=1 00:07:09.036 11:10:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:09.036 11:10:50 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:09.036 11:10:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:09.036 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:07:09.036 11:10:50 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.295 [2024-10-13 11:10:50.647395] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:09.295 [2024-10-13 11:10:50.647482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58341 ] 00:07:09.295 [2024-10-13 11:10:50.777958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.295 [2024-10-13 11:10:50.835615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.295  [2024-10-13T11:10:51.155Z] Copying: 512/512 [B] (average 500 kBps) 00:07:09.553 00:07:09.553 ************************************ 00:07:09.553 END TEST dd_flag_nofollow_forced_aio 00:07:09.553 ************************************ 00:07:09.553 11:10:51 -- dd/posix.sh@49 -- # [[ lsmtzma48aggsfdc7mw3y37ia00fvcvfkshfjy9v4wsqiniwe2c95s0ihzp1x2ybhrtjbthcyowyef0627aree4b1uuv0sfjada16b8tjwfzt62kn0788nogaw3iq9890ixqo20xoqiy8742rd0l3o2g0dy5cqy24ju2zbx6zivua6o9w4es3fhc87fyat5fwg6xrh21zqne08ba6be2kamfkqlzwtr5f9lj3e801kryedy7a17s9lxmgz3heflu8lb5kkwxb4udat3iziee8x0ofdpaekf3aw2ze2oajp67mu9h6nxcuulxs0dxwh96o0u5bl9ebvj2tkywpw556qunhmglv3flu9f40mwj11isxegy6jcmdej7wbbu3aumtdkgav8wdvqrek8r16tpcy72zalgja37c7xbinz2kxblva27nyo5bogoy9j6yyf1rns0j5snz0fan8nqjjy2xsqeg8hscz80eognswm9v5ye58b90cp4ltn32tz6st55 == \l\s\m\t\z\m\a\4\8\a\g\g\s\f\d\c\7\m\w\3\y\3\7\i\a\0\0\f\v\c\v\f\k\s\h\f\j\y\9\v\4\w\s\q\i\n\i\w\e\2\c\9\5\s\0\i\h\z\p\1\x\2\y\b\h\r\t\j\b\t\h\c\y\o\w\y\e\f\0\6\2\7\a\r\e\e\4\b\1\u\u\v\0\s\f\j\a\d\a\1\6\b\8\t\j\w\f\z\t\6\2\k\n\0\7\8\8\n\o\g\a\w\3\i\q\9\8\9\0\i\x\q\o\2\0\x\o\q\i\y\8\7\4\2\r\d\0\l\3\o\2\g\0\d\y\5\c\q\y\2\4\j\u\2\z\b\x\6\z\i\v\u\a\6\o\9\w\4\e\s\3\f\h\c\8\7\f\y\a\t\5\f\w\g\6\x\r\h\2\1\z\q\n\e\0\8\b\a\6\b\e\2\k\a\m\f\k\q\l\z\w\t\r\5\f\9\l\j\3\e\8\0\1\k\r\y\e\d\y\7\a\1\7\s\9\l\x\m\g\z\3\h\e\f\l\u\8\l\b\5\k\k\w\x\b\4\u\d\a\t\3\i\z\i\e\e\8\x\0\o\f\d\p\a\e\k\f\3\a\w\2\z\e\2\o\a\j\p\6\7\m\u\9\h\6\n\x\c\u\u\l\x\s\0\d\x\w\h\9\6\o\0\u\5\b\l\9\e\b\v\j\2\t\k\y\w\p\w\5\5\6\q\u\n\h\m\g\l\v\3\f\l\u\9\f\4\0\m\w\j\1\1\i\s\x\e\g\y\6\j\c\m\d\e\j\7\w\b\b\u\3\a\u\m\t\d\k\g\a\v\8\w\d\v\q\r\e\k\8\r\1\6\t\p\c\y\7\2\z\a\l\g\j\a\3\7\c\7\x\b\i\n\z\2\k\x\b\l\v\a\2\7\n\y\o\5\b\o\g\o\y\9\j\6\y\y\f\1\r\n\s\0\j\5\s\n\z\0\f\a\n\8\n\q\j\j\y\2\x\s\q\e\g\8\h\s\c\z\8\0\e\o\g\n\s\w\m\9\v\5\y\e\5\8\b\9\0\c\p\4\l\t\n\3\2\t\z\6\s\t\5\5 ]] 00:07:09.553 00:07:09.553 real 0m1.427s 00:07:09.553 user 0m0.805s 00:07:09.553 sys 0m0.292s 00:07:09.553 11:10:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.553 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.553 11:10:51 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:09.553 11:10:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.553 11:10:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.553 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.553 ************************************ 00:07:09.553 START TEST dd_flag_noatime_forced_aio 00:07:09.553 ************************************ 00:07:09.553 11:10:51 -- common/autotest_common.sh@1104 -- # noatime 00:07:09.553 11:10:51 -- dd/posix.sh@53 -- # local atime_if 00:07:09.553 11:10:51 -- dd/posix.sh@54 -- # local atime_of 00:07:09.553 11:10:51 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:09.553 11:10:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:09.553 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.553 11:10:51 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.553 11:10:51 -- dd/posix.sh@60 -- 
# atime_if=1728817850 00:07:09.553 11:10:51 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.553 11:10:51 -- dd/posix.sh@61 -- # atime_of=1728817851 00:07:09.553 11:10:51 -- dd/posix.sh@66 -- # sleep 1 00:07:10.929 11:10:52 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.929 [2024-10-13 11:10:52.189000] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:10.929 [2024-10-13 11:10:52.189279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:07:10.929 [2024-10-13 11:10:52.317218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.929 [2024-10-13 11:10:52.376098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.929  [2024-10-13T11:10:52.789Z] Copying: 512/512 [B] (average 500 kBps) 00:07:11.187 00:07:11.187 11:10:52 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.187 11:10:52 -- dd/posix.sh@69 -- # (( atime_if == 1728817850 )) 00:07:11.187 11:10:52 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.187 11:10:52 -- dd/posix.sh@70 -- # (( atime_of == 1728817851 )) 00:07:11.187 11:10:52 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.187 [2024-10-13 11:10:52.676050] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.187 [2024-10-13 11:10:52.676143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58393 ] 00:07:11.446 [2024-10-13 11:10:52.811297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.446 [2024-10-13 11:10:52.862470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.446  [2024-10-13T11:10:53.307Z] Copying: 512/512 [B] (average 500 kBps) 00:07:11.705 00:07:11.705 11:10:53 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.705 ************************************ 00:07:11.705 END TEST dd_flag_noatime_forced_aio 00:07:11.705 ************************************ 00:07:11.705 11:10:53 -- dd/posix.sh@73 -- # (( atime_if < 1728817852 )) 00:07:11.705 00:07:11.705 real 0m1.960s 00:07:11.705 user 0m0.508s 00:07:11.705 sys 0m0.205s 00:07:11.705 11:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.705 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.705 11:10:53 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:11.705 11:10:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.705 11:10:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.705 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.705 ************************************ 00:07:11.705 START TEST dd_flags_misc_forced_aio 00:07:11.705 ************************************ 00:07:11.705 11:10:53 -- common/autotest_common.sh@1104 -- # io 00:07:11.705 11:10:53 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:11.705 11:10:53 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:11.705 11:10:53 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:11.705 11:10:53 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:11.705 11:10:53 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:11.705 11:10:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:11.705 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.705 11:10:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.705 11:10:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:11.705 [2024-10-13 11:10:53.200973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.705 [2024-10-13 11:10:53.201249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:07:11.964 [2024-10-13 11:10:53.343367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.964 [2024-10-13 11:10:53.399993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.964  [2024-10-13T11:10:53.824Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.222 00:07:12.222 11:10:53 -- dd/posix.sh@93 -- # [[ 0wgaxk056z729p8968gmio157nw1etiu6djcf1g326m9vvr143cmf35xzb69fsufreg33ov0lg6cgkzzfuw4vgxjpu628f54ueijhva440ng2zv1wuabercd8f9rqgboaeak61pvlvgxo2d0yzllftif85f0ksvamk6ofo95fvw0lpmn6m3vinlfb3o1zo0f2ll9luujfw3dw32sfeppufegiubybwztbsboszeg8duhsj2exunal4wx13d0bpe81ib08rvngd2oa38cizqbscht75z26l0redb66hi3j3ae1v1veuvsf2rqo6aaq111yrzuhpd29sxto378g2j6iw216pc8hz271qd7mhpi1e9ihaws40dnpbluyfs0eu8sfo3n4y0jocsfok7rmr78hliwkl3vtejt7vaev5ryfwpjuxpgp4eevvizsbc540a146hygp0bnfp0vlkn9xlm7i4sptlm94qi9ipkzdr7v4zdsa5i8nu0gtydazn8pb0l == \0\w\g\a\x\k\0\5\6\z\7\2\9\p\8\9\6\8\g\m\i\o\1\5\7\n\w\1\e\t\i\u\6\d\j\c\f\1\g\3\2\6\m\9\v\v\r\1\4\3\c\m\f\3\5\x\z\b\6\9\f\s\u\f\r\e\g\3\3\o\v\0\l\g\6\c\g\k\z\z\f\u\w\4\v\g\x\j\p\u\6\2\8\f\5\4\u\e\i\j\h\v\a\4\4\0\n\g\2\z\v\1\w\u\a\b\e\r\c\d\8\f\9\r\q\g\b\o\a\e\a\k\6\1\p\v\l\v\g\x\o\2\d\0\y\z\l\l\f\t\i\f\8\5\f\0\k\s\v\a\m\k\6\o\f\o\9\5\f\v\w\0\l\p\m\n\6\m\3\v\i\n\l\f\b\3\o\1\z\o\0\f\2\l\l\9\l\u\u\j\f\w\3\d\w\3\2\s\f\e\p\p\u\f\e\g\i\u\b\y\b\w\z\t\b\s\b\o\s\z\e\g\8\d\u\h\s\j\2\e\x\u\n\a\l\4\w\x\1\3\d\0\b\p\e\8\1\i\b\0\8\r\v\n\g\d\2\o\a\3\8\c\i\z\q\b\s\c\h\t\7\5\z\2\6\l\0\r\e\d\b\6\6\h\i\3\j\3\a\e\1\v\1\v\e\u\v\s\f\2\r\q\o\6\a\a\q\1\1\1\y\r\z\u\h\p\d\2\9\s\x\t\o\3\7\8\g\2\j\6\i\w\2\1\6\p\c\8\h\z\2\7\1\q\d\7\m\h\p\i\1\e\9\i\h\a\w\s\4\0\d\n\p\b\l\u\y\f\s\0\e\u\8\s\f\o\3\n\4\y\0\j\o\c\s\f\o\k\7\r\m\r\7\8\h\l\i\w\k\l\3\v\t\e\j\t\7\v\a\e\v\5\r\y\f\w\p\j\u\x\p\g\p\4\e\e\v\v\i\z\s\b\c\5\4\0\a\1\4\6\h\y\g\p\0\b\n\f\p\0\v\l\k\n\9\x\l\m\7\i\4\s\p\t\l\m\9\4\q\i\9\i\p\k\z\d\r\7\v\4\z\d\s\a\5\i\8\n\u\0\g\t\y\d\a\z\n\8\p\b\0\l ]] 00:07:12.222 11:10:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.222 11:10:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.222 [2024-10-13 11:10:53.685094] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.222 [2024-10-13 11:10:53.685187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58427 ] 00:07:12.222 [2024-10-13 11:10:53.820229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.481 [2024-10-13 11:10:53.868718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.482  [2024-10-13T11:10:54.342Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.740 00:07:12.740 11:10:54 -- dd/posix.sh@93 -- # [[ 0wgaxk056z729p8968gmio157nw1etiu6djcf1g326m9vvr143cmf35xzb69fsufreg33ov0lg6cgkzzfuw4vgxjpu628f54ueijhva440ng2zv1wuabercd8f9rqgboaeak61pvlvgxo2d0yzllftif85f0ksvamk6ofo95fvw0lpmn6m3vinlfb3o1zo0f2ll9luujfw3dw32sfeppufegiubybwztbsboszeg8duhsj2exunal4wx13d0bpe81ib08rvngd2oa38cizqbscht75z26l0redb66hi3j3ae1v1veuvsf2rqo6aaq111yrzuhpd29sxto378g2j6iw216pc8hz271qd7mhpi1e9ihaws40dnpbluyfs0eu8sfo3n4y0jocsfok7rmr78hliwkl3vtejt7vaev5ryfwpjuxpgp4eevvizsbc540a146hygp0bnfp0vlkn9xlm7i4sptlm94qi9ipkzdr7v4zdsa5i8nu0gtydazn8pb0l == \0\w\g\a\x\k\0\5\6\z\7\2\9\p\8\9\6\8\g\m\i\o\1\5\7\n\w\1\e\t\i\u\6\d\j\c\f\1\g\3\2\6\m\9\v\v\r\1\4\3\c\m\f\3\5\x\z\b\6\9\f\s\u\f\r\e\g\3\3\o\v\0\l\g\6\c\g\k\z\z\f\u\w\4\v\g\x\j\p\u\6\2\8\f\5\4\u\e\i\j\h\v\a\4\4\0\n\g\2\z\v\1\w\u\a\b\e\r\c\d\8\f\9\r\q\g\b\o\a\e\a\k\6\1\p\v\l\v\g\x\o\2\d\0\y\z\l\l\f\t\i\f\8\5\f\0\k\s\v\a\m\k\6\o\f\o\9\5\f\v\w\0\l\p\m\n\6\m\3\v\i\n\l\f\b\3\o\1\z\o\0\f\2\l\l\9\l\u\u\j\f\w\3\d\w\3\2\s\f\e\p\p\u\f\e\g\i\u\b\y\b\w\z\t\b\s\b\o\s\z\e\g\8\d\u\h\s\j\2\e\x\u\n\a\l\4\w\x\1\3\d\0\b\p\e\8\1\i\b\0\8\r\v\n\g\d\2\o\a\3\8\c\i\z\q\b\s\c\h\t\7\5\z\2\6\l\0\r\e\d\b\6\6\h\i\3\j\3\a\e\1\v\1\v\e\u\v\s\f\2\r\q\o\6\a\a\q\1\1\1\y\r\z\u\h\p\d\2\9\s\x\t\o\3\7\8\g\2\j\6\i\w\2\1\6\p\c\8\h\z\2\7\1\q\d\7\m\h\p\i\1\e\9\i\h\a\w\s\4\0\d\n\p\b\l\u\y\f\s\0\e\u\8\s\f\o\3\n\4\y\0\j\o\c\s\f\o\k\7\r\m\r\7\8\h\l\i\w\k\l\3\v\t\e\j\t\7\v\a\e\v\5\r\y\f\w\p\j\u\x\p\g\p\4\e\e\v\v\i\z\s\b\c\5\4\0\a\1\4\6\h\y\g\p\0\b\n\f\p\0\v\l\k\n\9\x\l\m\7\i\4\s\p\t\l\m\9\4\q\i\9\i\p\k\z\d\r\7\v\4\z\d\s\a\5\i\8\n\u\0\g\t\y\d\a\z\n\8\p\b\0\l ]] 00:07:12.740 11:10:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.740 11:10:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:12.740 [2024-10-13 11:10:54.142115] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.740 [2024-10-13 11:10:54.142221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58429 ] 00:07:12.740 [2024-10-13 11:10:54.276860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.740 [2024-10-13 11:10:54.324856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.000  [2024-10-13T11:10:54.602Z] Copying: 512/512 [B] (average 125 kBps) 00:07:13.000 00:07:13.000 11:10:54 -- dd/posix.sh@93 -- # [[ 0wgaxk056z729p8968gmio157nw1etiu6djcf1g326m9vvr143cmf35xzb69fsufreg33ov0lg6cgkzzfuw4vgxjpu628f54ueijhva440ng2zv1wuabercd8f9rqgboaeak61pvlvgxo2d0yzllftif85f0ksvamk6ofo95fvw0lpmn6m3vinlfb3o1zo0f2ll9luujfw3dw32sfeppufegiubybwztbsboszeg8duhsj2exunal4wx13d0bpe81ib08rvngd2oa38cizqbscht75z26l0redb66hi3j3ae1v1veuvsf2rqo6aaq111yrzuhpd29sxto378g2j6iw216pc8hz271qd7mhpi1e9ihaws40dnpbluyfs0eu8sfo3n4y0jocsfok7rmr78hliwkl3vtejt7vaev5ryfwpjuxpgp4eevvizsbc540a146hygp0bnfp0vlkn9xlm7i4sptlm94qi9ipkzdr7v4zdsa5i8nu0gtydazn8pb0l == \0\w\g\a\x\k\0\5\6\z\7\2\9\p\8\9\6\8\g\m\i\o\1\5\7\n\w\1\e\t\i\u\6\d\j\c\f\1\g\3\2\6\m\9\v\v\r\1\4\3\c\m\f\3\5\x\z\b\6\9\f\s\u\f\r\e\g\3\3\o\v\0\l\g\6\c\g\k\z\z\f\u\w\4\v\g\x\j\p\u\6\2\8\f\5\4\u\e\i\j\h\v\a\4\4\0\n\g\2\z\v\1\w\u\a\b\e\r\c\d\8\f\9\r\q\g\b\o\a\e\a\k\6\1\p\v\l\v\g\x\o\2\d\0\y\z\l\l\f\t\i\f\8\5\f\0\k\s\v\a\m\k\6\o\f\o\9\5\f\v\w\0\l\p\m\n\6\m\3\v\i\n\l\f\b\3\o\1\z\o\0\f\2\l\l\9\l\u\u\j\f\w\3\d\w\3\2\s\f\e\p\p\u\f\e\g\i\u\b\y\b\w\z\t\b\s\b\o\s\z\e\g\8\d\u\h\s\j\2\e\x\u\n\a\l\4\w\x\1\3\d\0\b\p\e\8\1\i\b\0\8\r\v\n\g\d\2\o\a\3\8\c\i\z\q\b\s\c\h\t\7\5\z\2\6\l\0\r\e\d\b\6\6\h\i\3\j\3\a\e\1\v\1\v\e\u\v\s\f\2\r\q\o\6\a\a\q\1\1\1\y\r\z\u\h\p\d\2\9\s\x\t\o\3\7\8\g\2\j\6\i\w\2\1\6\p\c\8\h\z\2\7\1\q\d\7\m\h\p\i\1\e\9\i\h\a\w\s\4\0\d\n\p\b\l\u\y\f\s\0\e\u\8\s\f\o\3\n\4\y\0\j\o\c\s\f\o\k\7\r\m\r\7\8\h\l\i\w\k\l\3\v\t\e\j\t\7\v\a\e\v\5\r\y\f\w\p\j\u\x\p\g\p\4\e\e\v\v\i\z\s\b\c\5\4\0\a\1\4\6\h\y\g\p\0\b\n\f\p\0\v\l\k\n\9\x\l\m\7\i\4\s\p\t\l\m\9\4\q\i\9\i\p\k\z\d\r\7\v\4\z\d\s\a\5\i\8\n\u\0\g\t\y\d\a\z\n\8\p\b\0\l ]] 00:07:13.000 11:10:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.000 11:10:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:13.259 [2024-10-13 11:10:54.617538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:13.259 [2024-10-13 11:10:54.617641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58442 ] 00:07:13.259 [2024-10-13 11:10:54.754506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.259 [2024-10-13 11:10:54.805116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.259  [2024-10-13T11:10:55.120Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.518 00:07:13.518 11:10:55 -- dd/posix.sh@93 -- # [[ 0wgaxk056z729p8968gmio157nw1etiu6djcf1g326m9vvr143cmf35xzb69fsufreg33ov0lg6cgkzzfuw4vgxjpu628f54ueijhva440ng2zv1wuabercd8f9rqgboaeak61pvlvgxo2d0yzllftif85f0ksvamk6ofo95fvw0lpmn6m3vinlfb3o1zo0f2ll9luujfw3dw32sfeppufegiubybwztbsboszeg8duhsj2exunal4wx13d0bpe81ib08rvngd2oa38cizqbscht75z26l0redb66hi3j3ae1v1veuvsf2rqo6aaq111yrzuhpd29sxto378g2j6iw216pc8hz271qd7mhpi1e9ihaws40dnpbluyfs0eu8sfo3n4y0jocsfok7rmr78hliwkl3vtejt7vaev5ryfwpjuxpgp4eevvizsbc540a146hygp0bnfp0vlkn9xlm7i4sptlm94qi9ipkzdr7v4zdsa5i8nu0gtydazn8pb0l == \0\w\g\a\x\k\0\5\6\z\7\2\9\p\8\9\6\8\g\m\i\o\1\5\7\n\w\1\e\t\i\u\6\d\j\c\f\1\g\3\2\6\m\9\v\v\r\1\4\3\c\m\f\3\5\x\z\b\6\9\f\s\u\f\r\e\g\3\3\o\v\0\l\g\6\c\g\k\z\z\f\u\w\4\v\g\x\j\p\u\6\2\8\f\5\4\u\e\i\j\h\v\a\4\4\0\n\g\2\z\v\1\w\u\a\b\e\r\c\d\8\f\9\r\q\g\b\o\a\e\a\k\6\1\p\v\l\v\g\x\o\2\d\0\y\z\l\l\f\t\i\f\8\5\f\0\k\s\v\a\m\k\6\o\f\o\9\5\f\v\w\0\l\p\m\n\6\m\3\v\i\n\l\f\b\3\o\1\z\o\0\f\2\l\l\9\l\u\u\j\f\w\3\d\w\3\2\s\f\e\p\p\u\f\e\g\i\u\b\y\b\w\z\t\b\s\b\o\s\z\e\g\8\d\u\h\s\j\2\e\x\u\n\a\l\4\w\x\1\3\d\0\b\p\e\8\1\i\b\0\8\r\v\n\g\d\2\o\a\3\8\c\i\z\q\b\s\c\h\t\7\5\z\2\6\l\0\r\e\d\b\6\6\h\i\3\j\3\a\e\1\v\1\v\e\u\v\s\f\2\r\q\o\6\a\a\q\1\1\1\y\r\z\u\h\p\d\2\9\s\x\t\o\3\7\8\g\2\j\6\i\w\2\1\6\p\c\8\h\z\2\7\1\q\d\7\m\h\p\i\1\e\9\i\h\a\w\s\4\0\d\n\p\b\l\u\y\f\s\0\e\u\8\s\f\o\3\n\4\y\0\j\o\c\s\f\o\k\7\r\m\r\7\8\h\l\i\w\k\l\3\v\t\e\j\t\7\v\a\e\v\5\r\y\f\w\p\j\u\x\p\g\p\4\e\e\v\v\i\z\s\b\c\5\4\0\a\1\4\6\h\y\g\p\0\b\n\f\p\0\v\l\k\n\9\x\l\m\7\i\4\s\p\t\l\m\9\4\q\i\9\i\p\k\z\d\r\7\v\4\z\d\s\a\5\i\8\n\u\0\g\t\y\d\a\z\n\8\p\b\0\l ]] 00:07:13.518 11:10:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:13.518 11:10:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:13.518 11:10:55 -- dd/common.sh@98 -- # xtrace_disable 00:07:13.518 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.518 11:10:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.518 11:10:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:13.518 [2024-10-13 11:10:55.090421] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:13.518 [2024-10-13 11:10:55.090544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:07:13.777 [2024-10-13 11:10:55.226315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.777 [2024-10-13 11:10:55.276873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.777  [2024-10-13T11:10:55.638Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.036 00:07:14.036 11:10:55 -- dd/posix.sh@93 -- # [[ q5h8cbovap3vxwtu7nccglndd6l9jsvs63oiynal2ia2fpcw03lvn6knr7gi3c7v2m5ybagvr0ko01qu19rw9wn9bqsog7hfur8a6plfb1fp33yavsggruy68krm3auxt6obturabq4k2mic35fz1324o201c1zil52hnfloa8nzds6uf7l751bn1tz3qxbura4z2r4t2lv0tq92krppe1i1bfva26rk231zhsudmbfebomabs3i5zczbu2qvep7hht57r7xb6mlsyg34k9kko00lwvw6nrfy5ta2k3oaa18wdh9z8g4z6wrw6seb1hky5ak5ghmu2iwi22orxjdsdbpcalqykax0tx6n2gswz6z0g41amsafddnnrbfdnkm11nv7w8qqpw1i7mcu0tc1qxv4s8a1jqhhl9wbfq7ot89dwhbcmkzvxgrtu2qhtukxtd4dnpucoktys9nw2onrmpjuuvtpsn0w7ttvdui7rufe3cuaqv2g4mknhijabng == \q\5\h\8\c\b\o\v\a\p\3\v\x\w\t\u\7\n\c\c\g\l\n\d\d\6\l\9\j\s\v\s\6\3\o\i\y\n\a\l\2\i\a\2\f\p\c\w\0\3\l\v\n\6\k\n\r\7\g\i\3\c\7\v\2\m\5\y\b\a\g\v\r\0\k\o\0\1\q\u\1\9\r\w\9\w\n\9\b\q\s\o\g\7\h\f\u\r\8\a\6\p\l\f\b\1\f\p\3\3\y\a\v\s\g\g\r\u\y\6\8\k\r\m\3\a\u\x\t\6\o\b\t\u\r\a\b\q\4\k\2\m\i\c\3\5\f\z\1\3\2\4\o\2\0\1\c\1\z\i\l\5\2\h\n\f\l\o\a\8\n\z\d\s\6\u\f\7\l\7\5\1\b\n\1\t\z\3\q\x\b\u\r\a\4\z\2\r\4\t\2\l\v\0\t\q\9\2\k\r\p\p\e\1\i\1\b\f\v\a\2\6\r\k\2\3\1\z\h\s\u\d\m\b\f\e\b\o\m\a\b\s\3\i\5\z\c\z\b\u\2\q\v\e\p\7\h\h\t\5\7\r\7\x\b\6\m\l\s\y\g\3\4\k\9\k\k\o\0\0\l\w\v\w\6\n\r\f\y\5\t\a\2\k\3\o\a\a\1\8\w\d\h\9\z\8\g\4\z\6\w\r\w\6\s\e\b\1\h\k\y\5\a\k\5\g\h\m\u\2\i\w\i\2\2\o\r\x\j\d\s\d\b\p\c\a\l\q\y\k\a\x\0\t\x\6\n\2\g\s\w\z\6\z\0\g\4\1\a\m\s\a\f\d\d\n\n\r\b\f\d\n\k\m\1\1\n\v\7\w\8\q\q\p\w\1\i\7\m\c\u\0\t\c\1\q\x\v\4\s\8\a\1\j\q\h\h\l\9\w\b\f\q\7\o\t\8\9\d\w\h\b\c\m\k\z\v\x\g\r\t\u\2\q\h\t\u\k\x\t\d\4\d\n\p\u\c\o\k\t\y\s\9\n\w\2\o\n\r\m\p\j\u\u\v\t\p\s\n\0\w\7\t\t\v\d\u\i\7\r\u\f\e\3\c\u\a\q\v\2\g\4\m\k\n\h\i\j\a\b\n\g ]] 00:07:14.036 11:10:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.036 11:10:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:14.036 [2024-10-13 11:10:55.560267] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:14.036 [2024-10-13 11:10:55.560390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58450 ] 00:07:14.295 [2024-10-13 11:10:55.695979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.295 [2024-10-13 11:10:55.745073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.295  [2024-10-13T11:10:56.156Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.554 00:07:14.554 11:10:55 -- dd/posix.sh@93 -- # [[ q5h8cbovap3vxwtu7nccglndd6l9jsvs63oiynal2ia2fpcw03lvn6knr7gi3c7v2m5ybagvr0ko01qu19rw9wn9bqsog7hfur8a6plfb1fp33yavsggruy68krm3auxt6obturabq4k2mic35fz1324o201c1zil52hnfloa8nzds6uf7l751bn1tz3qxbura4z2r4t2lv0tq92krppe1i1bfva26rk231zhsudmbfebomabs3i5zczbu2qvep7hht57r7xb6mlsyg34k9kko00lwvw6nrfy5ta2k3oaa18wdh9z8g4z6wrw6seb1hky5ak5ghmu2iwi22orxjdsdbpcalqykax0tx6n2gswz6z0g41amsafddnnrbfdnkm11nv7w8qqpw1i7mcu0tc1qxv4s8a1jqhhl9wbfq7ot89dwhbcmkzvxgrtu2qhtukxtd4dnpucoktys9nw2onrmpjuuvtpsn0w7ttvdui7rufe3cuaqv2g4mknhijabng == \q\5\h\8\c\b\o\v\a\p\3\v\x\w\t\u\7\n\c\c\g\l\n\d\d\6\l\9\j\s\v\s\6\3\o\i\y\n\a\l\2\i\a\2\f\p\c\w\0\3\l\v\n\6\k\n\r\7\g\i\3\c\7\v\2\m\5\y\b\a\g\v\r\0\k\o\0\1\q\u\1\9\r\w\9\w\n\9\b\q\s\o\g\7\h\f\u\r\8\a\6\p\l\f\b\1\f\p\3\3\y\a\v\s\g\g\r\u\y\6\8\k\r\m\3\a\u\x\t\6\o\b\t\u\r\a\b\q\4\k\2\m\i\c\3\5\f\z\1\3\2\4\o\2\0\1\c\1\z\i\l\5\2\h\n\f\l\o\a\8\n\z\d\s\6\u\f\7\l\7\5\1\b\n\1\t\z\3\q\x\b\u\r\a\4\z\2\r\4\t\2\l\v\0\t\q\9\2\k\r\p\p\e\1\i\1\b\f\v\a\2\6\r\k\2\3\1\z\h\s\u\d\m\b\f\e\b\o\m\a\b\s\3\i\5\z\c\z\b\u\2\q\v\e\p\7\h\h\t\5\7\r\7\x\b\6\m\l\s\y\g\3\4\k\9\k\k\o\0\0\l\w\v\w\6\n\r\f\y\5\t\a\2\k\3\o\a\a\1\8\w\d\h\9\z\8\g\4\z\6\w\r\w\6\s\e\b\1\h\k\y\5\a\k\5\g\h\m\u\2\i\w\i\2\2\o\r\x\j\d\s\d\b\p\c\a\l\q\y\k\a\x\0\t\x\6\n\2\g\s\w\z\6\z\0\g\4\1\a\m\s\a\f\d\d\n\n\r\b\f\d\n\k\m\1\1\n\v\7\w\8\q\q\p\w\1\i\7\m\c\u\0\t\c\1\q\x\v\4\s\8\a\1\j\q\h\h\l\9\w\b\f\q\7\o\t\8\9\d\w\h\b\c\m\k\z\v\x\g\r\t\u\2\q\h\t\u\k\x\t\d\4\d\n\p\u\c\o\k\t\y\s\9\n\w\2\o\n\r\m\p\j\u\u\v\t\p\s\n\0\w\7\t\t\v\d\u\i\7\r\u\f\e\3\c\u\a\q\v\2\g\4\m\k\n\h\i\j\a\b\n\g ]] 00:07:14.554 11:10:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.554 11:10:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:14.554 [2024-10-13 11:10:56.008599] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:14.554 [2024-10-13 11:10:56.008691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58459 ] 00:07:14.554 [2024-10-13 11:10:56.142433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.813 [2024-10-13 11:10:56.190744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.813  [2024-10-13T11:10:56.415Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.813 00:07:14.813 11:10:56 -- dd/posix.sh@93 -- # [[ q5h8cbovap3vxwtu7nccglndd6l9jsvs63oiynal2ia2fpcw03lvn6knr7gi3c7v2m5ybagvr0ko01qu19rw9wn9bqsog7hfur8a6plfb1fp33yavsggruy68krm3auxt6obturabq4k2mic35fz1324o201c1zil52hnfloa8nzds6uf7l751bn1tz3qxbura4z2r4t2lv0tq92krppe1i1bfva26rk231zhsudmbfebomabs3i5zczbu2qvep7hht57r7xb6mlsyg34k9kko00lwvw6nrfy5ta2k3oaa18wdh9z8g4z6wrw6seb1hky5ak5ghmu2iwi22orxjdsdbpcalqykax0tx6n2gswz6z0g41amsafddnnrbfdnkm11nv7w8qqpw1i7mcu0tc1qxv4s8a1jqhhl9wbfq7ot89dwhbcmkzvxgrtu2qhtukxtd4dnpucoktys9nw2onrmpjuuvtpsn0w7ttvdui7rufe3cuaqv2g4mknhijabng == \q\5\h\8\c\b\o\v\a\p\3\v\x\w\t\u\7\n\c\c\g\l\n\d\d\6\l\9\j\s\v\s\6\3\o\i\y\n\a\l\2\i\a\2\f\p\c\w\0\3\l\v\n\6\k\n\r\7\g\i\3\c\7\v\2\m\5\y\b\a\g\v\r\0\k\o\0\1\q\u\1\9\r\w\9\w\n\9\b\q\s\o\g\7\h\f\u\r\8\a\6\p\l\f\b\1\f\p\3\3\y\a\v\s\g\g\r\u\y\6\8\k\r\m\3\a\u\x\t\6\o\b\t\u\r\a\b\q\4\k\2\m\i\c\3\5\f\z\1\3\2\4\o\2\0\1\c\1\z\i\l\5\2\h\n\f\l\o\a\8\n\z\d\s\6\u\f\7\l\7\5\1\b\n\1\t\z\3\q\x\b\u\r\a\4\z\2\r\4\t\2\l\v\0\t\q\9\2\k\r\p\p\e\1\i\1\b\f\v\a\2\6\r\k\2\3\1\z\h\s\u\d\m\b\f\e\b\o\m\a\b\s\3\i\5\z\c\z\b\u\2\q\v\e\p\7\h\h\t\5\7\r\7\x\b\6\m\l\s\y\g\3\4\k\9\k\k\o\0\0\l\w\v\w\6\n\r\f\y\5\t\a\2\k\3\o\a\a\1\8\w\d\h\9\z\8\g\4\z\6\w\r\w\6\s\e\b\1\h\k\y\5\a\k\5\g\h\m\u\2\i\w\i\2\2\o\r\x\j\d\s\d\b\p\c\a\l\q\y\k\a\x\0\t\x\6\n\2\g\s\w\z\6\z\0\g\4\1\a\m\s\a\f\d\d\n\n\r\b\f\d\n\k\m\1\1\n\v\7\w\8\q\q\p\w\1\i\7\m\c\u\0\t\c\1\q\x\v\4\s\8\a\1\j\q\h\h\l\9\w\b\f\q\7\o\t\8\9\d\w\h\b\c\m\k\z\v\x\g\r\t\u\2\q\h\t\u\k\x\t\d\4\d\n\p\u\c\o\k\t\y\s\9\n\w\2\o\n\r\m\p\j\u\u\v\t\p\s\n\0\w\7\t\t\v\d\u\i\7\r\u\f\e\3\c\u\a\q\v\2\g\4\m\k\n\h\i\j\a\b\n\g ]] 00:07:14.813 11:10:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.813 11:10:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:15.072 [2024-10-13 11:10:56.455246] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:15.072 [2024-10-13 11:10:56.455378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58461 ] 00:07:15.072 [2024-10-13 11:10:56.589892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.072 [2024-10-13 11:10:56.637421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.331  [2024-10-13T11:10:56.933Z] Copying: 512/512 [B] (average 250 kBps) 00:07:15.331 00:07:15.331 11:10:56 -- dd/posix.sh@93 -- # [[ q5h8cbovap3vxwtu7nccglndd6l9jsvs63oiynal2ia2fpcw03lvn6knr7gi3c7v2m5ybagvr0ko01qu19rw9wn9bqsog7hfur8a6plfb1fp33yavsggruy68krm3auxt6obturabq4k2mic35fz1324o201c1zil52hnfloa8nzds6uf7l751bn1tz3qxbura4z2r4t2lv0tq92krppe1i1bfva26rk231zhsudmbfebomabs3i5zczbu2qvep7hht57r7xb6mlsyg34k9kko00lwvw6nrfy5ta2k3oaa18wdh9z8g4z6wrw6seb1hky5ak5ghmu2iwi22orxjdsdbpcalqykax0tx6n2gswz6z0g41amsafddnnrbfdnkm11nv7w8qqpw1i7mcu0tc1qxv4s8a1jqhhl9wbfq7ot89dwhbcmkzvxgrtu2qhtukxtd4dnpucoktys9nw2onrmpjuuvtpsn0w7ttvdui7rufe3cuaqv2g4mknhijabng == \q\5\h\8\c\b\o\v\a\p\3\v\x\w\t\u\7\n\c\c\g\l\n\d\d\6\l\9\j\s\v\s\6\3\o\i\y\n\a\l\2\i\a\2\f\p\c\w\0\3\l\v\n\6\k\n\r\7\g\i\3\c\7\v\2\m\5\y\b\a\g\v\r\0\k\o\0\1\q\u\1\9\r\w\9\w\n\9\b\q\s\o\g\7\h\f\u\r\8\a\6\p\l\f\b\1\f\p\3\3\y\a\v\s\g\g\r\u\y\6\8\k\r\m\3\a\u\x\t\6\o\b\t\u\r\a\b\q\4\k\2\m\i\c\3\5\f\z\1\3\2\4\o\2\0\1\c\1\z\i\l\5\2\h\n\f\l\o\a\8\n\z\d\s\6\u\f\7\l\7\5\1\b\n\1\t\z\3\q\x\b\u\r\a\4\z\2\r\4\t\2\l\v\0\t\q\9\2\k\r\p\p\e\1\i\1\b\f\v\a\2\6\r\k\2\3\1\z\h\s\u\d\m\b\f\e\b\o\m\a\b\s\3\i\5\z\c\z\b\u\2\q\v\e\p\7\h\h\t\5\7\r\7\x\b\6\m\l\s\y\g\3\4\k\9\k\k\o\0\0\l\w\v\w\6\n\r\f\y\5\t\a\2\k\3\o\a\a\1\8\w\d\h\9\z\8\g\4\z\6\w\r\w\6\s\e\b\1\h\k\y\5\a\k\5\g\h\m\u\2\i\w\i\2\2\o\r\x\j\d\s\d\b\p\c\a\l\q\y\k\a\x\0\t\x\6\n\2\g\s\w\z\6\z\0\g\4\1\a\m\s\a\f\d\d\n\n\r\b\f\d\n\k\m\1\1\n\v\7\w\8\q\q\p\w\1\i\7\m\c\u\0\t\c\1\q\x\v\4\s\8\a\1\j\q\h\h\l\9\w\b\f\q\7\o\t\8\9\d\w\h\b\c\m\k\z\v\x\g\r\t\u\2\q\h\t\u\k\x\t\d\4\d\n\p\u\c\o\k\t\y\s\9\n\w\2\o\n\r\m\p\j\u\u\v\t\p\s\n\0\w\7\t\t\v\d\u\i\7\r\u\f\e\3\c\u\a\q\v\2\g\4\m\k\n\h\i\j\a\b\n\g ]] 00:07:15.331 00:07:15.331 real 0m3.711s 00:07:15.331 user 0m2.018s 00:07:15.331 sys 0m0.703s 00:07:15.331 ************************************ 00:07:15.331 END TEST dd_flags_misc_forced_aio 00:07:15.331 ************************************ 00:07:15.331 11:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.331 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:15.331 11:10:56 -- dd/posix.sh@1 -- # cleanup 00:07:15.331 11:10:56 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:15.331 11:10:56 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:15.331 00:07:15.331 real 0m17.712s 00:07:15.331 user 0m8.551s 00:07:15.331 sys 0m3.322s 00:07:15.331 11:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.331 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:15.331 ************************************ 00:07:15.331 END TEST spdk_dd_posix 00:07:15.331 ************************************ 00:07:15.594 11:10:56 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:15.594 11:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.594 11:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:07:15.594 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:15.594 ************************************ 00:07:15.594 START TEST spdk_dd_malloc 00:07:15.594 ************************************ 00:07:15.594 11:10:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:15.594 * Looking for test storage... 00:07:15.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.594 11:10:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.594 11:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.594 11:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.594 11:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.595 11:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 11:10:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 11:10:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 11:10:57 -- paths/export.sh@5 -- # export PATH 00:07:15.595 11:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 11:10:57 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:15.595 11:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.595 11:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.595 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:15.595 ************************************ 00:07:15.595 START TEST dd_malloc_copy 
00:07:15.595 ************************************ 00:07:15.595 11:10:57 -- common/autotest_common.sh@1104 -- # malloc_copy 00:07:15.595 11:10:57 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:15.595 11:10:57 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:15.595 11:10:57 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.595 11:10:57 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:15.595 11:10:57 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.595 11:10:57 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:15.595 11:10:57 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:15.595 11:10:57 -- dd/malloc.sh@28 -- # gen_conf 00:07:15.595 11:10:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:15.595 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:15.595 [2024-10-13 11:10:57.083152] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:15.595 [2024-10-13 11:10:57.083276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58534 ] 00:07:15.595 { 00:07:15.595 "subsystems": [ 00:07:15.595 { 00:07:15.595 "subsystem": "bdev", 00:07:15.595 "config": [ 00:07:15.595 { 00:07:15.595 "params": { 00:07:15.595 "block_size": 512, 00:07:15.595 "num_blocks": 1048576, 00:07:15.595 "name": "malloc0" 00:07:15.595 }, 00:07:15.595 "method": "bdev_malloc_create" 00:07:15.595 }, 00:07:15.595 { 00:07:15.595 "params": { 00:07:15.595 "block_size": 512, 00:07:15.595 "num_blocks": 1048576, 00:07:15.595 "name": "malloc1" 00:07:15.595 }, 00:07:15.595 "method": "bdev_malloc_create" 00:07:15.595 }, 00:07:15.595 { 00:07:15.595 "method": "bdev_wait_for_examine" 00:07:15.595 } 00:07:15.595 ] 00:07:15.595 } 00:07:15.595 ] 00:07:15.595 } 00:07:15.904 [2024-10-13 11:10:57.218248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.904 [2024-10-13 11:10:57.267439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.285  [2024-10-13T11:10:59.824Z] Copying: 244/512 [MB] (244 MBps) [2024-10-13T11:10:59.824Z] Copying: 487/512 [MB] (242 MBps) [2024-10-13T11:11:00.084Z] Copying: 512/512 [MB] (average 242 MBps) 00:07:18.482 00:07:18.482 11:10:59 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:18.482 11:10:59 -- dd/malloc.sh@33 -- # gen_conf 00:07:18.482 11:10:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.482 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:07:18.482 [2024-10-13 11:10:59.991484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:18.482 [2024-10-13 11:10:59.991581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:07:18.482 { 00:07:18.482 "subsystems": [ 00:07:18.482 { 00:07:18.482 "subsystem": "bdev", 00:07:18.482 "config": [ 00:07:18.482 { 00:07:18.482 "params": { 00:07:18.482 "block_size": 512, 00:07:18.482 "num_blocks": 1048576, 00:07:18.482 "name": "malloc0" 00:07:18.482 }, 00:07:18.482 "method": "bdev_malloc_create" 00:07:18.482 }, 00:07:18.482 { 00:07:18.482 "params": { 00:07:18.482 "block_size": 512, 00:07:18.482 "num_blocks": 1048576, 00:07:18.482 "name": "malloc1" 00:07:18.482 }, 00:07:18.482 "method": "bdev_malloc_create" 00:07:18.482 }, 00:07:18.482 { 00:07:18.482 "method": "bdev_wait_for_examine" 00:07:18.482 } 00:07:18.482 ] 00:07:18.482 } 00:07:18.482 ] 00:07:18.482 } 00:07:18.741 [2024-10-13 11:11:00.127692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.741 [2024-10-13 11:11:00.176510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.119  [2024-10-13T11:11:02.658Z] Copying: 243/512 [MB] (243 MBps) [2024-10-13T11:11:02.658Z] Copying: 487/512 [MB] (244 MBps) [2024-10-13T11:11:02.916Z] Copying: 512/512 [MB] (average 242 MBps) 00:07:21.314 00:07:21.314 00:07:21.314 real 0m5.815s 00:07:21.314 user 0m5.204s 00:07:21.314 sys 0m0.466s 00:07:21.314 11:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.314 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:07:21.314 ************************************ 00:07:21.314 END TEST dd_malloc_copy 00:07:21.314 ************************************ 00:07:21.314 00:07:21.314 real 0m5.942s 00:07:21.314 user 0m5.261s 00:07:21.314 sys 0m0.536s 00:07:21.314 11:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.314 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:07:21.314 ************************************ 00:07:21.314 END TEST spdk_dd_malloc 00:07:21.314 ************************************ 00:07:21.573 11:11:02 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:21.573 11:11:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:21.573 11:11:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.573 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 ************************************ 00:07:21.573 START TEST spdk_dd_bdev_to_bdev 00:07:21.573 ************************************ 00:07:21.573 11:11:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:21.573 * Looking for test storage... 
00:07:21.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:21.573 11:11:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.573 11:11:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.573 11:11:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.573 11:11:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.573 11:11:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.574 11:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.574 11:11:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.574 11:11:03 -- paths/export.sh@5 -- # export PATH 00:07:21.574 11:11:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:21.574 11:11:03 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:21.574 11:11:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:21.574 11:11:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.574 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 ************************************ 00:07:21.574 START TEST dd_inflate_file 00:07:21.574 ************************************ 00:07:21.574 11:11:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:21.574 [2024-10-13 11:11:03.072503] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:21.574 [2024-10-13 11:11:03.072608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58674 ] 00:07:21.833 [2024-10-13 11:11:03.204727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.833 [2024-10-13 11:11:03.252203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.833  [2024-10-13T11:11:03.694Z] Copying: 64/64 [MB] (average 2064 MBps) 00:07:22.092 00:07:22.092 00:07:22.092 real 0m0.478s 00:07:22.092 user 0m0.241s 00:07:22.092 sys 0m0.121s 00:07:22.092 11:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.092 ************************************ 00:07:22.092 END TEST dd_inflate_file 00:07:22.092 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.092 ************************************ 00:07:22.092 11:11:03 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:22.092 11:11:03 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:22.092 11:11:03 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:22.092 11:11:03 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.092 11:11:03 -- dd/common.sh@31 -- # xtrace_disable 00:07:22.092 11:11:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:22.092 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.092 11:11:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.092 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.092 ************************************ 00:07:22.092 START 
TEST dd_copy_to_out_bdev 00:07:22.092 ************************************ 00:07:22.092 11:11:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.092 { 00:07:22.092 "subsystems": [ 00:07:22.092 { 00:07:22.092 "subsystem": "bdev", 00:07:22.092 "config": [ 00:07:22.092 { 00:07:22.092 "params": { 00:07:22.092 "trtype": "pcie", 00:07:22.092 "traddr": "0000:00:06.0", 00:07:22.092 "name": "Nvme0" 00:07:22.092 }, 00:07:22.092 "method": "bdev_nvme_attach_controller" 00:07:22.092 }, 00:07:22.092 { 00:07:22.092 "params": { 00:07:22.092 "trtype": "pcie", 00:07:22.092 "traddr": "0000:00:07.0", 00:07:22.092 "name": "Nvme1" 00:07:22.092 }, 00:07:22.092 "method": "bdev_nvme_attach_controller" 00:07:22.092 }, 00:07:22.092 { 00:07:22.092 "method": "bdev_wait_for_examine" 00:07:22.092 } 00:07:22.092 ] 00:07:22.092 } 00:07:22.092 ] 00:07:22.092 } 00:07:22.092 [2024-10-13 11:11:03.612649] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:22.092 [2024-10-13 11:11:03.612805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58700 ] 00:07:22.351 [2024-10-13 11:11:03.746454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.351 [2024-10-13 11:11:03.794656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.728  [2024-10-13T11:11:05.330Z] Copying: 48/64 [MB] (48 MBps) [2024-10-13T11:11:05.589Z] Copying: 64/64 [MB] (average 48 MBps) 00:07:23.987 00:07:23.987 00:07:23.987 real 0m1.953s 00:07:23.987 user 0m1.715s 00:07:23.987 sys 0m0.167s 00:07:23.987 11:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.987 ************************************ 00:07:23.987 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.987 END TEST dd_copy_to_out_bdev 00:07:23.987 ************************************ 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:23.987 11:11:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.987 11:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.987 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.987 ************************************ 00:07:23.987 START TEST dd_offset_magic 00:07:23.987 ************************************ 00:07:23.987 11:11:05 -- common/autotest_common.sh@1104 -- # offset_magic 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:23.987 11:11:05 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:23.988 11:11:05 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.988 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:24.247 [2024-10-13 11:11:05.610525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:24.247 [2024-10-13 11:11:05.610636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58744 ] 00:07:24.247 { 00:07:24.247 "subsystems": [ 00:07:24.247 { 00:07:24.247 "subsystem": "bdev", 00:07:24.247 "config": [ 00:07:24.247 { 00:07:24.247 "params": { 00:07:24.247 "trtype": "pcie", 00:07:24.247 "traddr": "0000:00:06.0", 00:07:24.247 "name": "Nvme0" 00:07:24.247 }, 00:07:24.247 "method": "bdev_nvme_attach_controller" 00:07:24.247 }, 00:07:24.247 { 00:07:24.247 "params": { 00:07:24.247 "trtype": "pcie", 00:07:24.247 "traddr": "0000:00:07.0", 00:07:24.247 "name": "Nvme1" 00:07:24.247 }, 00:07:24.247 "method": "bdev_nvme_attach_controller" 00:07:24.247 }, 00:07:24.247 { 00:07:24.247 "method": "bdev_wait_for_examine" 00:07:24.247 } 00:07:24.247 ] 00:07:24.247 } 00:07:24.247 ] 00:07:24.247 } 00:07:24.247 [2024-10-13 11:11:05.747755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.247 [2024-10-13 11:11:05.797102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.506  [2024-10-13T11:11:06.367Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:24.765 00:07:24.765 11:11:06 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:24.765 11:11:06 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:24.765 11:11:06 -- dd/common.sh@31 -- # xtrace_disable 00:07:24.765 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.765 [2024-10-13 11:11:06.326180] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:24.765 [2024-10-13 11:11:06.326967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58764 ] 00:07:24.765 { 00:07:24.765 "subsystems": [ 00:07:24.765 { 00:07:24.765 "subsystem": "bdev", 00:07:24.765 "config": [ 00:07:24.765 { 00:07:24.765 "params": { 00:07:24.765 "trtype": "pcie", 00:07:24.765 "traddr": "0000:00:06.0", 00:07:24.765 "name": "Nvme0" 00:07:24.765 }, 00:07:24.765 "method": "bdev_nvme_attach_controller" 00:07:24.765 }, 00:07:24.765 { 00:07:24.765 "params": { 00:07:24.765 "trtype": "pcie", 00:07:24.765 "traddr": "0000:00:07.0", 00:07:24.765 "name": "Nvme1" 00:07:24.765 }, 00:07:24.765 "method": "bdev_nvme_attach_controller" 00:07:24.765 }, 00:07:24.765 { 00:07:24.765 "method": "bdev_wait_for_examine" 00:07:24.765 } 00:07:24.765 ] 00:07:24.765 } 00:07:24.765 ] 00:07:24.765 } 00:07:25.024 [2024-10-13 11:11:06.467638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.024 [2024-10-13 11:11:06.514962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.283  [2024-10-13T11:11:06.886Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:25.284 00:07:25.543 11:11:06 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:25.543 11:11:06 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:25.543 11:11:06 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:25.543 11:11:06 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:25.543 11:11:06 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:25.543 11:11:06 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.543 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.543 [2024-10-13 11:11:06.936295] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:25.543 [2024-10-13 11:11:06.936399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:07:25.543 { 00:07:25.543 "subsystems": [ 00:07:25.543 { 00:07:25.543 "subsystem": "bdev", 00:07:25.543 "config": [ 00:07:25.543 { 00:07:25.543 "params": { 00:07:25.543 "trtype": "pcie", 00:07:25.543 "traddr": "0000:00:06.0", 00:07:25.543 "name": "Nvme0" 00:07:25.543 }, 00:07:25.543 "method": "bdev_nvme_attach_controller" 00:07:25.543 }, 00:07:25.543 { 00:07:25.543 "params": { 00:07:25.543 "trtype": "pcie", 00:07:25.543 "traddr": "0000:00:07.0", 00:07:25.543 "name": "Nvme1" 00:07:25.543 }, 00:07:25.543 "method": "bdev_nvme_attach_controller" 00:07:25.543 }, 00:07:25.543 { 00:07:25.543 "method": "bdev_wait_for_examine" 00:07:25.543 } 00:07:25.543 ] 00:07:25.543 } 00:07:25.543 ] 00:07:25.543 } 00:07:25.543 [2024-10-13 11:11:07.072294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.543 [2024-10-13 11:11:07.120218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.802  [2024-10-13T11:11:07.663Z] Copying: 65/65 [MB] (average 1031 MBps) 00:07:26.061 00:07:26.061 11:11:07 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:26.061 11:11:07 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:26.061 11:11:07 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.061 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:07:26.061 [2024-10-13 11:11:07.629534] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:26.061 [2024-10-13 11:11:07.629668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:07:26.061 { 00:07:26.061 "subsystems": [ 00:07:26.061 { 00:07:26.061 "subsystem": "bdev", 00:07:26.061 "config": [ 00:07:26.061 { 00:07:26.061 "params": { 00:07:26.061 "trtype": "pcie", 00:07:26.061 "traddr": "0000:00:06.0", 00:07:26.061 "name": "Nvme0" 00:07:26.061 }, 00:07:26.061 "method": "bdev_nvme_attach_controller" 00:07:26.061 }, 00:07:26.061 { 00:07:26.061 "params": { 00:07:26.061 "trtype": "pcie", 00:07:26.061 "traddr": "0000:00:07.0", 00:07:26.061 "name": "Nvme1" 00:07:26.061 }, 00:07:26.061 "method": "bdev_nvme_attach_controller" 00:07:26.061 }, 00:07:26.061 { 00:07:26.061 "method": "bdev_wait_for_examine" 00:07:26.061 } 00:07:26.061 ] 00:07:26.061 } 00:07:26.061 ] 00:07:26.061 } 00:07:26.321 [2024-10-13 11:11:07.771833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.321 [2024-10-13 11:11:07.823813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.579  [2024-10-13T11:11:08.181Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.579 00:07:26.579 11:11:08 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.579 11:11:08 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.579 00:07:26.579 real 0m2.602s 00:07:26.579 user 0m1.997s 00:07:26.579 sys 0m0.445s 00:07:26.579 11:11:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.579 ************************************ 00:07:26.579 END TEST dd_offset_magic 00:07:26.579 ************************************ 00:07:26.579 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.837 11:11:08 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:26.837 11:11:08 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:26.837 11:11:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:26.837 11:11:08 -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.837 11:11:08 -- dd/common.sh@12 -- # local size=4194330 00:07:26.837 11:11:08 -- dd/common.sh@14 -- # local bs=1048576 00:07:26.837 11:11:08 -- dd/common.sh@15 -- # local count=5 00:07:26.837 11:11:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:26.837 11:11:08 -- dd/common.sh@18 -- # gen_conf 00:07:26.837 11:11:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.837 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.837 { 00:07:26.837 "subsystems": [ 00:07:26.837 { 00:07:26.837 "subsystem": "bdev", 00:07:26.837 "config": [ 00:07:26.837 { 00:07:26.837 "params": { 00:07:26.837 "trtype": "pcie", 00:07:26.837 "traddr": "0000:00:06.0", 00:07:26.837 "name": "Nvme0" 00:07:26.837 }, 00:07:26.837 "method": "bdev_nvme_attach_controller" 00:07:26.837 }, 00:07:26.837 { 00:07:26.837 "params": { 00:07:26.837 "trtype": "pcie", 00:07:26.837 "traddr": "0000:00:07.0", 00:07:26.837 "name": "Nvme1" 00:07:26.837 }, 00:07:26.838 "method": "bdev_nvme_attach_controller" 00:07:26.838 }, 00:07:26.838 { 00:07:26.838 "method": "bdev_wait_for_examine" 00:07:26.838 } 00:07:26.838 ] 00:07:26.838 } 00:07:26.838 ] 00:07:26.838 } 00:07:26.838 [2024-10-13 11:11:08.278982] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:26.838 [2024-10-13 11:11:08.279100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58823 ] 00:07:26.838 [2024-10-13 11:11:08.423267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.096 [2024-10-13 11:11:08.472180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.096  [2024-10-13T11:11:08.957Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:27.355 00:07:27.355 11:11:08 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:27.355 11:11:08 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:27.355 11:11:08 -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.355 11:11:08 -- dd/common.sh@12 -- # local size=4194330 00:07:27.355 11:11:08 -- dd/common.sh@14 -- # local bs=1048576 00:07:27.355 11:11:08 -- dd/common.sh@15 -- # local count=5 00:07:27.355 11:11:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:27.355 11:11:08 -- dd/common.sh@18 -- # gen_conf 00:07:27.355 11:11:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.355 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.355 [2024-10-13 11:11:08.880825] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:27.355 [2024-10-13 11:11:08.880930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:07:27.355 { 00:07:27.355 "subsystems": [ 00:07:27.355 { 00:07:27.355 "subsystem": "bdev", 00:07:27.356 "config": [ 00:07:27.356 { 00:07:27.356 "params": { 00:07:27.356 "trtype": "pcie", 00:07:27.356 "traddr": "0000:00:06.0", 00:07:27.356 "name": "Nvme0" 00:07:27.356 }, 00:07:27.356 "method": "bdev_nvme_attach_controller" 00:07:27.356 }, 00:07:27.356 { 00:07:27.356 "params": { 00:07:27.356 "trtype": "pcie", 00:07:27.356 "traddr": "0000:00:07.0", 00:07:27.356 "name": "Nvme1" 00:07:27.356 }, 00:07:27.356 "method": "bdev_nvme_attach_controller" 00:07:27.356 }, 00:07:27.356 { 00:07:27.356 "method": "bdev_wait_for_examine" 00:07:27.356 } 00:07:27.356 ] 00:07:27.356 } 00:07:27.356 ] 00:07:27.356 } 00:07:27.614 [2024-10-13 11:11:09.020032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.614 [2024-10-13 11:11:09.074697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.873  [2024-10-13T11:11:09.475Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:27.873 00:07:27.873 11:11:09 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:27.873 00:07:27.873 real 0m6.514s 00:07:27.873 user 0m4.917s 00:07:27.873 sys 0m1.121s 00:07:27.873 11:11:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.873 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:27.873 ************************************ 00:07:27.873 END TEST spdk_dd_bdev_to_bdev 00:07:27.873 ************************************ 00:07:28.132 11:11:09 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:28.132 11:11:09 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.132 11:11:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.132 
11:11:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.132 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.132 ************************************ 00:07:28.132 START TEST spdk_dd_uring 00:07:28.132 ************************************ 00:07:28.132 11:11:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.132 * Looking for test storage... 00:07:28.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.132 11:11:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.132 11:11:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.132 11:11:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.132 11:11:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.132 11:11:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.132 11:11:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.132 11:11:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.132 11:11:09 -- paths/export.sh@5 -- # export PATH 00:07:28.133 11:11:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.133 11:11:09 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:28.133 11:11:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.133 11:11:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.133 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.133 
************************************ 00:07:28.133 START TEST dd_uring_copy 00:07:28.133 ************************************ 00:07:28.133 11:11:09 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:07:28.133 11:11:09 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:28.133 11:11:09 -- dd/uring.sh@16 -- # local magic 00:07:28.133 11:11:09 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:28.133 11:11:09 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:28.133 11:11:09 -- dd/uring.sh@19 -- # local verify_magic 00:07:28.133 11:11:09 -- dd/uring.sh@21 -- # init_zram 00:07:28.133 11:11:09 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:28.133 11:11:09 -- dd/common.sh@164 -- # return 00:07:28.133 11:11:09 -- dd/uring.sh@22 -- # create_zram_dev 00:07:28.133 11:11:09 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:28.133 11:11:09 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:28.133 11:11:09 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:28.133 11:11:09 -- dd/common.sh@181 -- # local id=1 00:07:28.133 11:11:09 -- dd/common.sh@182 -- # local size=512M 00:07:28.133 11:11:09 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:28.133 11:11:09 -- dd/common.sh@186 -- # echo 512M 00:07:28.133 11:11:09 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:28.133 11:11:09 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:28.133 11:11:09 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:28.133 11:11:09 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:28.133 11:11:09 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.133 11:11:09 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:28.133 11:11:09 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:28.133 11:11:09 -- dd/common.sh@98 -- # xtrace_disable 00:07:28.133 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.133 11:11:09 -- dd/uring.sh@41 -- # magic=f8v4kfjkpmh8u0znxd3qcdfgdvep3d6gpk5r37w11histcpq5amy9xc6f7wo3wqvqkamfxyzjg5jjsmcd9nzb6vmbuhc9jxcqi5iyyfxujai11vlpob4afrv0ggyvhm2b86dt10loagnd0dm6tc1tot9tyaecd1brcj9622z0chv96gyw9eq102hgqzwf9d3ens6r695e3whxgomdi9uussxczuo0lvt7ljekpn8srbevylxp7ydzquowt8fz3hfbmrjwijgb0pp5vy6wtyj4iqegu5idm8g85r8jgv1ta0yypfwikecuodyomnmna4is11ilcx0ige8wm4aoranqs1secoz2n69c7d0tjnnuct5qvu6o0iv8y0edob8ze1oyi7g1o03t2mbcs9ua6obsj0msjfeyw59meifa4iptywu7ibwvl9zd6ywz77b85xxn352i4oxsolbogozm16917j6dm3y6zukh6x6n3f74a0yb3ub8dl2u0y99mcizvltkx6r43x1r9su9qf4e08oee8019lxesgtwmu8ewrhc12dxfonrdr9t1mcb0bxj7tmf3d9h1k363ikabegihwuduzwf6kga8l34k6qkcmb8i78soodyj4ara1fmwjqsfsp4gbxlgjkehs8zkv6g0p2x151ijymu6nf9zvyvzqblc2sv39ndibkrtf94xvqo7u4rcghmy71lyb36mxi8ow1fc3nrvjd9750x1sf6ra5rzdeihg9ai1moh2od8s518roslyfl7uz8sr2c3ddcrmst3cv1p86i9zipjpn71nfxvovwnzjv8j5mf4c45zhvomms73oj0ldkmx8eqsnsplx9d1oz8zenebivpxkau3yx1az72qh8xibw2u1qxvpjpueez74o21m5h7819uk1p23xy17ch7u08w6hixbi964byp9rlkcuoo2e76nyoarzayb6ywfqx2n8zvpc1yhgx3jm6bd6cixq0z9bpxhwvn1w5fno3vfk5ygznogj00d618v 00:07:28.133 11:11:09 -- dd/uring.sh@42 -- # echo 
f8v4kfjkpmh8u0znxd3qcdfgdvep3d6gpk5r37w11histcpq5amy9xc6f7wo3wqvqkamfxyzjg5jjsmcd9nzb6vmbuhc9jxcqi5iyyfxujai11vlpob4afrv0ggyvhm2b86dt10loagnd0dm6tc1tot9tyaecd1brcj9622z0chv96gyw9eq102hgqzwf9d3ens6r695e3whxgomdi9uussxczuo0lvt7ljekpn8srbevylxp7ydzquowt8fz3hfbmrjwijgb0pp5vy6wtyj4iqegu5idm8g85r8jgv1ta0yypfwikecuodyomnmna4is11ilcx0ige8wm4aoranqs1secoz2n69c7d0tjnnuct5qvu6o0iv8y0edob8ze1oyi7g1o03t2mbcs9ua6obsj0msjfeyw59meifa4iptywu7ibwvl9zd6ywz77b85xxn352i4oxsolbogozm16917j6dm3y6zukh6x6n3f74a0yb3ub8dl2u0y99mcizvltkx6r43x1r9su9qf4e08oee8019lxesgtwmu8ewrhc12dxfonrdr9t1mcb0bxj7tmf3d9h1k363ikabegihwuduzwf6kga8l34k6qkcmb8i78soodyj4ara1fmwjqsfsp4gbxlgjkehs8zkv6g0p2x151ijymu6nf9zvyvzqblc2sv39ndibkrtf94xvqo7u4rcghmy71lyb36mxi8ow1fc3nrvjd9750x1sf6ra5rzdeihg9ai1moh2od8s518roslyfl7uz8sr2c3ddcrmst3cv1p86i9zipjpn71nfxvovwnzjv8j5mf4c45zhvomms73oj0ldkmx8eqsnsplx9d1oz8zenebivpxkau3yx1az72qh8xibw2u1qxvpjpueez74o21m5h7819uk1p23xy17ch7u08w6hixbi964byp9rlkcuoo2e76nyoarzayb6ywfqx2n8zvpc1yhgx3jm6bd6cixq0z9bpxhwvn1w5fno3vfk5ygznogj00d618v 00:07:28.133 11:11:09 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:28.133 [2024-10-13 11:11:09.651870] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:28.133 [2024-10-13 11:11:09.651967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:07:28.392 [2024-10-13 11:11:09.779771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.392 [2024-10-13 11:11:09.833861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.650  [2024-10-13T11:11:10.821Z] Copying: 511/511 [MB] (average 1861 MBps) 00:07:29.219 00:07:29.219 11:11:10 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:29.219 11:11:10 -- dd/uring.sh@54 -- # gen_conf 00:07:29.219 11:11:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:29.219 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:29.219 [2024-10-13 11:11:10.563468] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:29.219 [2024-10-13 11:11:10.563567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:07:29.219 { 00:07:29.219 "subsystems": [ 00:07:29.219 { 00:07:29.219 "subsystem": "bdev", 00:07:29.219 "config": [ 00:07:29.219 { 00:07:29.219 "params": { 00:07:29.219 "block_size": 512, 00:07:29.219 "num_blocks": 1048576, 00:07:29.219 "name": "malloc0" 00:07:29.219 }, 00:07:29.219 "method": "bdev_malloc_create" 00:07:29.219 }, 00:07:29.219 { 00:07:29.219 "params": { 00:07:29.219 "filename": "/dev/zram1", 00:07:29.219 "name": "uring0" 00:07:29.219 }, 00:07:29.219 "method": "bdev_uring_create" 00:07:29.219 }, 00:07:29.219 { 00:07:29.219 "method": "bdev_wait_for_examine" 00:07:29.219 } 00:07:29.219 ] 00:07:29.219 } 00:07:29.219 ] 00:07:29.219 } 00:07:29.219 [2024-10-13 11:11:10.699710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.219 [2024-10-13 11:11:10.751840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.601  [2024-10-13T11:11:13.140Z] Copying: 222/512 [MB] (222 MBps) [2024-10-13T11:11:13.399Z] Copying: 449/512 [MB] (226 MBps) [2024-10-13T11:11:13.658Z] Copying: 512/512 [MB] (average 224 MBps) 00:07:32.056 00:07:32.056 11:11:13 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:32.056 11:11:13 -- dd/uring.sh@60 -- # gen_conf 00:07:32.056 11:11:13 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.056 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.056 [2024-10-13 11:11:13.502093] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:32.056 [2024-10-13 11:11:13.502183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:07:32.056 { 00:07:32.056 "subsystems": [ 00:07:32.056 { 00:07:32.056 "subsystem": "bdev", 00:07:32.056 "config": [ 00:07:32.056 { 00:07:32.056 "params": { 00:07:32.056 "block_size": 512, 00:07:32.056 "num_blocks": 1048576, 00:07:32.056 "name": "malloc0" 00:07:32.056 }, 00:07:32.056 "method": "bdev_malloc_create" 00:07:32.056 }, 00:07:32.056 { 00:07:32.056 "params": { 00:07:32.056 "filename": "/dev/zram1", 00:07:32.056 "name": "uring0" 00:07:32.056 }, 00:07:32.056 "method": "bdev_uring_create" 00:07:32.056 }, 00:07:32.056 { 00:07:32.056 "method": "bdev_wait_for_examine" 00:07:32.056 } 00:07:32.056 ] 00:07:32.056 } 00:07:32.056 ] 00:07:32.056 } 00:07:32.056 [2024-10-13 11:11:13.639817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.315 [2024-10-13 11:11:13.688918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.253  [2024-10-13T11:11:16.233Z] Copying: 133/512 [MB] (133 MBps) [2024-10-13T11:11:17.170Z] Copying: 262/512 [MB] (128 MBps) [2024-10-13T11:11:17.744Z] Copying: 416/512 [MB] (154 MBps) [2024-10-13T11:11:18.003Z] Copying: 512/512 [MB] (average 133 MBps) 00:07:36.401 00:07:36.401 11:11:17 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:36.402 11:11:17 -- dd/uring.sh@66 -- # [[ f8v4kfjkpmh8u0znxd3qcdfgdvep3d6gpk5r37w11histcpq5amy9xc6f7wo3wqvqkamfxyzjg5jjsmcd9nzb6vmbuhc9jxcqi5iyyfxujai11vlpob4afrv0ggyvhm2b86dt10loagnd0dm6tc1tot9tyaecd1brcj9622z0chv96gyw9eq102hgqzwf9d3ens6r695e3whxgomdi9uussxczuo0lvt7ljekpn8srbevylxp7ydzquowt8fz3hfbmrjwijgb0pp5vy6wtyj4iqegu5idm8g85r8jgv1ta0yypfwikecuodyomnmna4is11ilcx0ige8wm4aoranqs1secoz2n69c7d0tjnnuct5qvu6o0iv8y0edob8ze1oyi7g1o03t2mbcs9ua6obsj0msjfeyw59meifa4iptywu7ibwvl9zd6ywz77b85xxn352i4oxsolbogozm16917j6dm3y6zukh6x6n3f74a0yb3ub8dl2u0y99mcizvltkx6r43x1r9su9qf4e08oee8019lxesgtwmu8ewrhc12dxfonrdr9t1mcb0bxj7tmf3d9h1k363ikabegihwuduzwf6kga8l34k6qkcmb8i78soodyj4ara1fmwjqsfsp4gbxlgjkehs8zkv6g0p2x151ijymu6nf9zvyvzqblc2sv39ndibkrtf94xvqo7u4rcghmy71lyb36mxi8ow1fc3nrvjd9750x1sf6ra5rzdeihg9ai1moh2od8s518roslyfl7uz8sr2c3ddcrmst3cv1p86i9zipjpn71nfxvovwnzjv8j5mf4c45zhvomms73oj0ldkmx8eqsnsplx9d1oz8zenebivpxkau3yx1az72qh8xibw2u1qxvpjpueez74o21m5h7819uk1p23xy17ch7u08w6hixbi964byp9rlkcuoo2e76nyoarzayb6ywfqx2n8zvpc1yhgx3jm6bd6cixq0z9bpxhwvn1w5fno3vfk5ygznogj00d618v == 
\f\8\v\4\k\f\j\k\p\m\h\8\u\0\z\n\x\d\3\q\c\d\f\g\d\v\e\p\3\d\6\g\p\k\5\r\3\7\w\1\1\h\i\s\t\c\p\q\5\a\m\y\9\x\c\6\f\7\w\o\3\w\q\v\q\k\a\m\f\x\y\z\j\g\5\j\j\s\m\c\d\9\n\z\b\6\v\m\b\u\h\c\9\j\x\c\q\i\5\i\y\y\f\x\u\j\a\i\1\1\v\l\p\o\b\4\a\f\r\v\0\g\g\y\v\h\m\2\b\8\6\d\t\1\0\l\o\a\g\n\d\0\d\m\6\t\c\1\t\o\t\9\t\y\a\e\c\d\1\b\r\c\j\9\6\2\2\z\0\c\h\v\9\6\g\y\w\9\e\q\1\0\2\h\g\q\z\w\f\9\d\3\e\n\s\6\r\6\9\5\e\3\w\h\x\g\o\m\d\i\9\u\u\s\s\x\c\z\u\o\0\l\v\t\7\l\j\e\k\p\n\8\s\r\b\e\v\y\l\x\p\7\y\d\z\q\u\o\w\t\8\f\z\3\h\f\b\m\r\j\w\i\j\g\b\0\p\p\5\v\y\6\w\t\y\j\4\i\q\e\g\u\5\i\d\m\8\g\8\5\r\8\j\g\v\1\t\a\0\y\y\p\f\w\i\k\e\c\u\o\d\y\o\m\n\m\n\a\4\i\s\1\1\i\l\c\x\0\i\g\e\8\w\m\4\a\o\r\a\n\q\s\1\s\e\c\o\z\2\n\6\9\c\7\d\0\t\j\n\n\u\c\t\5\q\v\u\6\o\0\i\v\8\y\0\e\d\o\b\8\z\e\1\o\y\i\7\g\1\o\0\3\t\2\m\b\c\s\9\u\a\6\o\b\s\j\0\m\s\j\f\e\y\w\5\9\m\e\i\f\a\4\i\p\t\y\w\u\7\i\b\w\v\l\9\z\d\6\y\w\z\7\7\b\8\5\x\x\n\3\5\2\i\4\o\x\s\o\l\b\o\g\o\z\m\1\6\9\1\7\j\6\d\m\3\y\6\z\u\k\h\6\x\6\n\3\f\7\4\a\0\y\b\3\u\b\8\d\l\2\u\0\y\9\9\m\c\i\z\v\l\t\k\x\6\r\4\3\x\1\r\9\s\u\9\q\f\4\e\0\8\o\e\e\8\0\1\9\l\x\e\s\g\t\w\m\u\8\e\w\r\h\c\1\2\d\x\f\o\n\r\d\r\9\t\1\m\c\b\0\b\x\j\7\t\m\f\3\d\9\h\1\k\3\6\3\i\k\a\b\e\g\i\h\w\u\d\u\z\w\f\6\k\g\a\8\l\3\4\k\6\q\k\c\m\b\8\i\7\8\s\o\o\d\y\j\4\a\r\a\1\f\m\w\j\q\s\f\s\p\4\g\b\x\l\g\j\k\e\h\s\8\z\k\v\6\g\0\p\2\x\1\5\1\i\j\y\m\u\6\n\f\9\z\v\y\v\z\q\b\l\c\2\s\v\3\9\n\d\i\b\k\r\t\f\9\4\x\v\q\o\7\u\4\r\c\g\h\m\y\7\1\l\y\b\3\6\m\x\i\8\o\w\1\f\c\3\n\r\v\j\d\9\7\5\0\x\1\s\f\6\r\a\5\r\z\d\e\i\h\g\9\a\i\1\m\o\h\2\o\d\8\s\5\1\8\r\o\s\l\y\f\l\7\u\z\8\s\r\2\c\3\d\d\c\r\m\s\t\3\c\v\1\p\8\6\i\9\z\i\p\j\p\n\7\1\n\f\x\v\o\v\w\n\z\j\v\8\j\5\m\f\4\c\4\5\z\h\v\o\m\m\s\7\3\o\j\0\l\d\k\m\x\8\e\q\s\n\s\p\l\x\9\d\1\o\z\8\z\e\n\e\b\i\v\p\x\k\a\u\3\y\x\1\a\z\7\2\q\h\8\x\i\b\w\2\u\1\q\x\v\p\j\p\u\e\e\z\7\4\o\2\1\m\5\h\7\8\1\9\u\k\1\p\2\3\x\y\1\7\c\h\7\u\0\8\w\6\h\i\x\b\i\9\6\4\b\y\p\9\r\l\k\c\u\o\o\2\e\7\6\n\y\o\a\r\z\a\y\b\6\y\w\f\q\x\2\n\8\z\v\p\c\1\y\h\g\x\3\j\m\6\b\d\6\c\i\x\q\0\z\9\b\p\x\h\w\v\n\1\w\5\f\n\o\3\v\f\k\5\y\g\z\n\o\g\j\0\0\d\6\1\8\v ]] 00:07:36.402 11:11:17 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:36.402 11:11:17 -- dd/uring.sh@69 -- # [[ f8v4kfjkpmh8u0znxd3qcdfgdvep3d6gpk5r37w11histcpq5amy9xc6f7wo3wqvqkamfxyzjg5jjsmcd9nzb6vmbuhc9jxcqi5iyyfxujai11vlpob4afrv0ggyvhm2b86dt10loagnd0dm6tc1tot9tyaecd1brcj9622z0chv96gyw9eq102hgqzwf9d3ens6r695e3whxgomdi9uussxczuo0lvt7ljekpn8srbevylxp7ydzquowt8fz3hfbmrjwijgb0pp5vy6wtyj4iqegu5idm8g85r8jgv1ta0yypfwikecuodyomnmna4is11ilcx0ige8wm4aoranqs1secoz2n69c7d0tjnnuct5qvu6o0iv8y0edob8ze1oyi7g1o03t2mbcs9ua6obsj0msjfeyw59meifa4iptywu7ibwvl9zd6ywz77b85xxn352i4oxsolbogozm16917j6dm3y6zukh6x6n3f74a0yb3ub8dl2u0y99mcizvltkx6r43x1r9su9qf4e08oee8019lxesgtwmu8ewrhc12dxfonrdr9t1mcb0bxj7tmf3d9h1k363ikabegihwuduzwf6kga8l34k6qkcmb8i78soodyj4ara1fmwjqsfsp4gbxlgjkehs8zkv6g0p2x151ijymu6nf9zvyvzqblc2sv39ndibkrtf94xvqo7u4rcghmy71lyb36mxi8ow1fc3nrvjd9750x1sf6ra5rzdeihg9ai1moh2od8s518roslyfl7uz8sr2c3ddcrmst3cv1p86i9zipjpn71nfxvovwnzjv8j5mf4c45zhvomms73oj0ldkmx8eqsnsplx9d1oz8zenebivpxkau3yx1az72qh8xibw2u1qxvpjpueez74o21m5h7819uk1p23xy17ch7u08w6hixbi964byp9rlkcuoo2e76nyoarzayb6ywfqx2n8zvpc1yhgx3jm6bd6cixq0z9bpxhwvn1w5fno3vfk5ygznogj00d618v == 
\f\8\v\4\k\f\j\k\p\m\h\8\u\0\z\n\x\d\3\q\c\d\f\g\d\v\e\p\3\d\6\g\p\k\5\r\3\7\w\1\1\h\i\s\t\c\p\q\5\a\m\y\9\x\c\6\f\7\w\o\3\w\q\v\q\k\a\m\f\x\y\z\j\g\5\j\j\s\m\c\d\9\n\z\b\6\v\m\b\u\h\c\9\j\x\c\q\i\5\i\y\y\f\x\u\j\a\i\1\1\v\l\p\o\b\4\a\f\r\v\0\g\g\y\v\h\m\2\b\8\6\d\t\1\0\l\o\a\g\n\d\0\d\m\6\t\c\1\t\o\t\9\t\y\a\e\c\d\1\b\r\c\j\9\6\2\2\z\0\c\h\v\9\6\g\y\w\9\e\q\1\0\2\h\g\q\z\w\f\9\d\3\e\n\s\6\r\6\9\5\e\3\w\h\x\g\o\m\d\i\9\u\u\s\s\x\c\z\u\o\0\l\v\t\7\l\j\e\k\p\n\8\s\r\b\e\v\y\l\x\p\7\y\d\z\q\u\o\w\t\8\f\z\3\h\f\b\m\r\j\w\i\j\g\b\0\p\p\5\v\y\6\w\t\y\j\4\i\q\e\g\u\5\i\d\m\8\g\8\5\r\8\j\g\v\1\t\a\0\y\y\p\f\w\i\k\e\c\u\o\d\y\o\m\n\m\n\a\4\i\s\1\1\i\l\c\x\0\i\g\e\8\w\m\4\a\o\r\a\n\q\s\1\s\e\c\o\z\2\n\6\9\c\7\d\0\t\j\n\n\u\c\t\5\q\v\u\6\o\0\i\v\8\y\0\e\d\o\b\8\z\e\1\o\y\i\7\g\1\o\0\3\t\2\m\b\c\s\9\u\a\6\o\b\s\j\0\m\s\j\f\e\y\w\5\9\m\e\i\f\a\4\i\p\t\y\w\u\7\i\b\w\v\l\9\z\d\6\y\w\z\7\7\b\8\5\x\x\n\3\5\2\i\4\o\x\s\o\l\b\o\g\o\z\m\1\6\9\1\7\j\6\d\m\3\y\6\z\u\k\h\6\x\6\n\3\f\7\4\a\0\y\b\3\u\b\8\d\l\2\u\0\y\9\9\m\c\i\z\v\l\t\k\x\6\r\4\3\x\1\r\9\s\u\9\q\f\4\e\0\8\o\e\e\8\0\1\9\l\x\e\s\g\t\w\m\u\8\e\w\r\h\c\1\2\d\x\f\o\n\r\d\r\9\t\1\m\c\b\0\b\x\j\7\t\m\f\3\d\9\h\1\k\3\6\3\i\k\a\b\e\g\i\h\w\u\d\u\z\w\f\6\k\g\a\8\l\3\4\k\6\q\k\c\m\b\8\i\7\8\s\o\o\d\y\j\4\a\r\a\1\f\m\w\j\q\s\f\s\p\4\g\b\x\l\g\j\k\e\h\s\8\z\k\v\6\g\0\p\2\x\1\5\1\i\j\y\m\u\6\n\f\9\z\v\y\v\z\q\b\l\c\2\s\v\3\9\n\d\i\b\k\r\t\f\9\4\x\v\q\o\7\u\4\r\c\g\h\m\y\7\1\l\y\b\3\6\m\x\i\8\o\w\1\f\c\3\n\r\v\j\d\9\7\5\0\x\1\s\f\6\r\a\5\r\z\d\e\i\h\g\9\a\i\1\m\o\h\2\o\d\8\s\5\1\8\r\o\s\l\y\f\l\7\u\z\8\s\r\2\c\3\d\d\c\r\m\s\t\3\c\v\1\p\8\6\i\9\z\i\p\j\p\n\7\1\n\f\x\v\o\v\w\n\z\j\v\8\j\5\m\f\4\c\4\5\z\h\v\o\m\m\s\7\3\o\j\0\l\d\k\m\x\8\e\q\s\n\s\p\l\x\9\d\1\o\z\8\z\e\n\e\b\i\v\p\x\k\a\u\3\y\x\1\a\z\7\2\q\h\8\x\i\b\w\2\u\1\q\x\v\p\j\p\u\e\e\z\7\4\o\2\1\m\5\h\7\8\1\9\u\k\1\p\2\3\x\y\1\7\c\h\7\u\0\8\w\6\h\i\x\b\i\9\6\4\b\y\p\9\r\l\k\c\u\o\o\2\e\7\6\n\y\o\a\r\z\a\y\b\6\y\w\f\q\x\2\n\8\z\v\p\c\1\y\h\g\x\3\j\m\6\b\d\6\c\i\x\q\0\z\9\b\p\x\h\w\v\n\1\w\5\f\n\o\3\v\f\k\5\y\g\z\n\o\g\j\0\0\d\6\1\8\v ]] 00:07:36.402 11:11:17 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:36.968 11:11:18 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:36.968 11:11:18 -- dd/uring.sh@75 -- # gen_conf 00:07:36.968 11:11:18 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.968 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.968 [2024-10-13 11:11:18.400606] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:36.968 [2024-10-13 11:11:18.400700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ] 00:07:36.968 { 00:07:36.968 "subsystems": [ 00:07:36.968 { 00:07:36.968 "subsystem": "bdev", 00:07:36.968 "config": [ 00:07:36.968 { 00:07:36.968 "params": { 00:07:36.968 "block_size": 512, 00:07:36.968 "num_blocks": 1048576, 00:07:36.968 "name": "malloc0" 00:07:36.968 }, 00:07:36.968 "method": "bdev_malloc_create" 00:07:36.968 }, 00:07:36.968 { 00:07:36.968 "params": { 00:07:36.968 "filename": "/dev/zram1", 00:07:36.968 "name": "uring0" 00:07:36.968 }, 00:07:36.968 "method": "bdev_uring_create" 00:07:36.968 }, 00:07:36.968 { 00:07:36.968 "method": "bdev_wait_for_examine" 00:07:36.968 } 00:07:36.968 ] 00:07:36.968 } 00:07:36.968 ] 00:07:36.968 } 00:07:36.968 [2024-10-13 11:11:18.536507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.227 [2024-10-13 11:11:18.589260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.171  [2024-10-13T11:11:21.149Z] Copying: 166/512 [MB] (166 MBps) [2024-10-13T11:11:22.085Z] Copying: 332/512 [MB] (166 MBps) [2024-10-13T11:11:22.085Z] Copying: 500/512 [MB] (167 MBps) [2024-10-13T11:11:22.085Z] Copying: 512/512 [MB] (average 166 MBps) 00:07:40.483 00:07:40.483 11:11:22 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:40.483 11:11:22 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:40.483 11:11:22 -- dd/uring.sh@87 -- # : 00:07:40.483 11:11:22 -- dd/uring.sh@87 -- # gen_conf 00:07:40.483 11:11:22 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:40.483 11:11:22 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.483 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:07:40.483 11:11:22 -- dd/uring.sh@87 -- # : 00:07:40.742 [2024-10-13 11:11:22.113615] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:40.742 [2024-10-13 11:11:22.113714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59094 ] 00:07:40.742 { 00:07:40.742 "subsystems": [ 00:07:40.742 { 00:07:40.742 "subsystem": "bdev", 00:07:40.742 "config": [ 00:07:40.742 { 00:07:40.742 "params": { 00:07:40.742 "block_size": 512, 00:07:40.742 "num_blocks": 1048576, 00:07:40.742 "name": "malloc0" 00:07:40.742 }, 00:07:40.742 "method": "bdev_malloc_create" 00:07:40.742 }, 00:07:40.742 { 00:07:40.742 "params": { 00:07:40.742 "filename": "/dev/zram1", 00:07:40.742 "name": "uring0" 00:07:40.742 }, 00:07:40.742 "method": "bdev_uring_create" 00:07:40.742 }, 00:07:40.742 { 00:07:40.742 "params": { 00:07:40.742 "name": "uring0" 00:07:40.742 }, 00:07:40.742 "method": "bdev_uring_delete" 00:07:40.742 }, 00:07:40.742 { 00:07:40.742 "method": "bdev_wait_for_examine" 00:07:40.742 } 00:07:40.742 ] 00:07:40.742 } 00:07:40.742 ] 00:07:40.742 } 00:07:40.742 [2024-10-13 11:11:22.250848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.742 [2024-10-13 11:11:22.300769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.000  [2024-10-13T11:11:22.861Z] Copying: 0/0 [B] (average 0 Bps) 00:07:41.259 00:07:41.259 11:11:22 -- dd/uring.sh@94 -- # : 00:07:41.259 11:11:22 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:41.259 11:11:22 -- dd/uring.sh@94 -- # gen_conf 00:07:41.259 11:11:22 -- common/autotest_common.sh@640 -- # local es=0 00:07:41.259 11:11:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:41.259 11:11:22 -- dd/common.sh@31 -- # xtrace_disable 00:07:41.259 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:07:41.260 11:11:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.260 11:11:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.260 11:11:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.260 11:11:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.260 11:11:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.260 11:11:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.260 11:11:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.260 11:11:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.260 11:11:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:41.260 [2024-10-13 11:11:22.784387] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:41.260 [2024-10-13 11:11:22.784499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ] 00:07:41.260 { 00:07:41.260 "subsystems": [ 00:07:41.260 { 00:07:41.260 "subsystem": "bdev", 00:07:41.260 "config": [ 00:07:41.260 { 00:07:41.260 "params": { 00:07:41.260 "block_size": 512, 00:07:41.260 "num_blocks": 1048576, 00:07:41.260 "name": "malloc0" 00:07:41.260 }, 00:07:41.260 "method": "bdev_malloc_create" 00:07:41.260 }, 00:07:41.260 { 00:07:41.260 "params": { 00:07:41.260 "filename": "/dev/zram1", 00:07:41.260 "name": "uring0" 00:07:41.260 }, 00:07:41.260 "method": "bdev_uring_create" 00:07:41.260 }, 00:07:41.260 { 00:07:41.260 "params": { 00:07:41.260 "name": "uring0" 00:07:41.260 }, 00:07:41.260 "method": "bdev_uring_delete" 00:07:41.260 }, 00:07:41.260 { 00:07:41.260 "method": "bdev_wait_for_examine" 00:07:41.260 } 00:07:41.260 ] 00:07:41.260 } 00:07:41.260 ] 00:07:41.260 } 00:07:41.519 [2024-10-13 11:11:22.920702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.519 [2024-10-13 11:11:22.973009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.778 [2024-10-13 11:11:23.123452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:41.778 [2024-10-13 11:11:23.123496] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:41.778 [2024-10-13 11:11:23.123523] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:07:41.778 [2024-10-13 11:11:23.123532] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.778 [2024-10-13 11:11:23.287112] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.037 11:11:23 -- common/autotest_common.sh@643 -- # es=237 00:07:42.037 11:11:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.037 11:11:23 -- common/autotest_common.sh@652 -- # es=109 00:07:42.037 11:11:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:42.037 11:11:23 -- common/autotest_common.sh@660 -- # es=1 00:07:42.037 11:11:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.037 11:11:23 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:42.037 11:11:23 -- dd/common.sh@172 -- # local id=1 00:07:42.037 11:11:23 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:42.037 11:11:23 -- dd/common.sh@176 -- # echo 1 00:07:42.037 11:11:23 -- dd/common.sh@177 -- # echo 1 00:07:42.037 11:11:23 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:42.296 00:07:42.296 real 0m14.064s 00:07:42.296 user 0m7.938s 00:07:42.296 sys 0m5.418s 00:07:42.296 11:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.296 ************************************ 00:07:42.296 END TEST dd_uring_copy 00:07:42.296 ************************************ 00:07:42.296 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 00:07:42.296 real 0m14.199s 00:07:42.296 user 0m7.986s 00:07:42.296 sys 0m5.505s 00:07:42.296 11:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.296 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 ************************************ 00:07:42.296 END TEST spdk_dd_uring 00:07:42.296 ************************************ 00:07:42.296 11:11:23 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:42.296 11:11:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.296 11:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.296 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 ************************************ 00:07:42.296 START TEST spdk_dd_sparse 00:07:42.296 ************************************ 00:07:42.296 11:11:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:42.296 * Looking for test storage... 00:07:42.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:42.296 11:11:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.296 11:11:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.296 11:11:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.296 11:11:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.296 11:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.296 11:11:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.296 11:11:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.296 11:11:23 -- paths/export.sh@5 -- # export PATH 00:07:42.296 11:11:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.296 11:11:23 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:42.296 11:11:23 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:42.296 11:11:23 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:07:42.296 11:11:23 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:42.296 11:11:23 -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:42.296 11:11:23 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:42.296 11:11:23 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:42.296 11:11:23 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:42.296 11:11:23 -- dd/sparse.sh@118 -- # prepare 00:07:42.296 11:11:23 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:42.296 11:11:23 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:42.296 1+0 records in 00:07:42.296 1+0 records out 00:07:42.296 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00581922 s, 721 MB/s 00:07:42.296 11:11:23 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:42.296 1+0 records in 00:07:42.296 1+0 records out 00:07:42.296 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00612276 s, 685 MB/s 00:07:42.296 11:11:23 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:42.296 1+0 records in 00:07:42.296 1+0 records out 00:07:42.296 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00407163 s, 1.0 GB/s 00:07:42.296 11:11:23 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:42.296 11:11:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.296 11:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.296 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 ************************************ 00:07:42.296 START TEST dd_sparse_file_to_file 00:07:42.296 ************************************ 00:07:42.296 11:11:23 -- common/autotest_common.sh@1104 -- # file_to_file 00:07:42.296 11:11:23 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:42.296 11:11:23 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:42.296 11:11:23 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:42.296 11:11:23 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:42.296 11:11:23 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:42.296 11:11:23 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:42.296 11:11:23 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:42.296 11:11:23 -- dd/sparse.sh@41 -- # gen_conf 00:07:42.296 11:11:23 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.296 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.555 [2024-10-13 11:11:23.927061] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:42.555 [2024-10-13 11:11:23.927179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ] 00:07:42.555 { 00:07:42.555 "subsystems": [ 00:07:42.555 { 00:07:42.555 "subsystem": "bdev", 00:07:42.555 "config": [ 00:07:42.555 { 00:07:42.555 "params": { 00:07:42.555 "block_size": 4096, 00:07:42.555 "filename": "dd_sparse_aio_disk", 00:07:42.555 "name": "dd_aio" 00:07:42.555 }, 00:07:42.555 "method": "bdev_aio_create" 00:07:42.555 }, 00:07:42.555 { 00:07:42.555 "params": { 00:07:42.555 "lvs_name": "dd_lvstore", 00:07:42.555 "bdev_name": "dd_aio" 00:07:42.555 }, 00:07:42.555 "method": "bdev_lvol_create_lvstore" 00:07:42.555 }, 00:07:42.555 { 00:07:42.555 "method": "bdev_wait_for_examine" 00:07:42.555 } 00:07:42.555 ] 00:07:42.555 } 00:07:42.555 ] 00:07:42.555 } 00:07:42.555 [2024-10-13 11:11:24.064015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.555 [2024-10-13 11:11:24.116001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.814  [2024-10-13T11:11:24.675Z] Copying: 12/36 [MB] (average 1714 MBps) 00:07:43.073 00:07:43.073 11:11:24 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:43.073 11:11:24 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:43.073 11:11:24 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:43.073 11:11:24 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:43.073 11:11:24 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:43.073 11:11:24 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:43.073 11:11:24 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:43.073 11:11:24 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:43.073 11:11:24 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:43.073 11:11:24 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:43.073 00:07:43.073 real 0m0.568s 00:07:43.073 user 0m0.351s 00:07:43.073 sys 0m0.125s 00:07:43.073 11:11:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.073 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:07:43.073 ************************************ 00:07:43.073 END TEST dd_sparse_file_to_file 00:07:43.073 ************************************ 00:07:43.073 11:11:24 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:43.073 11:11:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.073 11:11:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.073 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:07:43.073 ************************************ 00:07:43.073 START TEST dd_sparse_file_to_bdev 00:07:43.073 ************************************ 00:07:43.073 11:11:24 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:07:43.073 11:11:24 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.073 11:11:24 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:43.073 11:11:24 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:43.073 11:11:24 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:43.073 11:11:24 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:43.073 11:11:24 -- 
dd/sparse.sh@73 -- # gen_conf 00:07:43.073 11:11:24 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.073 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:07:43.073 [2024-10-13 11:11:24.544004] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:43.073 [2024-10-13 11:11:24.544098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59251 ] 00:07:43.073 { 00:07:43.073 "subsystems": [ 00:07:43.073 { 00:07:43.073 "subsystem": "bdev", 00:07:43.073 "config": [ 00:07:43.073 { 00:07:43.073 "params": { 00:07:43.073 "block_size": 4096, 00:07:43.073 "filename": "dd_sparse_aio_disk", 00:07:43.073 "name": "dd_aio" 00:07:43.073 }, 00:07:43.073 "method": "bdev_aio_create" 00:07:43.073 }, 00:07:43.073 { 00:07:43.073 "params": { 00:07:43.073 "lvs_name": "dd_lvstore", 00:07:43.073 "lvol_name": "dd_lvol", 00:07:43.073 "size": 37748736, 00:07:43.073 "thin_provision": true 00:07:43.073 }, 00:07:43.073 "method": "bdev_lvol_create" 00:07:43.073 }, 00:07:43.073 { 00:07:43.073 "method": "bdev_wait_for_examine" 00:07:43.073 } 00:07:43.073 ] 00:07:43.073 } 00:07:43.073 ] 00:07:43.073 } 00:07:43.332 [2024-10-13 11:11:24.681156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.332 [2024-10-13 11:11:24.735662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.332 [2024-10-13 11:11:24.790829] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:43.332  [2024-10-13T11:11:24.934Z] Copying: 12/36 [MB] (average 342 MBps)[2024-10-13 11:11:24.841536] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:43.594 00:07:43.594 00:07:43.594 00:07:43.594 real 0m0.550s 00:07:43.594 user 0m0.368s 00:07:43.594 sys 0m0.107s 00:07:43.594 11:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.594 ************************************ 00:07:43.594 END TEST dd_sparse_file_to_bdev 00:07:43.594 ************************************ 00:07:43.594 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.594 11:11:25 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:43.594 11:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.594 11:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.594 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.594 ************************************ 00:07:43.594 START TEST dd_sparse_bdev_to_file 00:07:43.594 ************************************ 00:07:43.594 11:11:25 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:07:43.594 11:11:25 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:43.594 11:11:25 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:43.594 11:11:25 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.594 11:11:25 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:43.594 11:11:25 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:43.594 11:11:25 -- dd/sparse.sh@91 -- # gen_conf 00:07:43.594 11:11:25 -- dd/common.sh@31 
-- # xtrace_disable 00:07:43.594 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.594 [2024-10-13 11:11:25.149302] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:43.594 [2024-10-13 11:11:25.149414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ] 00:07:43.594 { 00:07:43.594 "subsystems": [ 00:07:43.594 { 00:07:43.594 "subsystem": "bdev", 00:07:43.594 "config": [ 00:07:43.594 { 00:07:43.594 "params": { 00:07:43.594 "block_size": 4096, 00:07:43.594 "filename": "dd_sparse_aio_disk", 00:07:43.594 "name": "dd_aio" 00:07:43.594 }, 00:07:43.594 "method": "bdev_aio_create" 00:07:43.594 }, 00:07:43.594 { 00:07:43.594 "method": "bdev_wait_for_examine" 00:07:43.594 } 00:07:43.594 ] 00:07:43.594 } 00:07:43.594 ] 00:07:43.594 } 00:07:43.862 [2024-10-13 11:11:25.283210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.862 [2024-10-13 11:11:25.333593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.862  [2024-10-13T11:11:25.723Z] Copying: 12/36 [MB] (average 1333 MBps) 00:07:44.121 00:07:44.121 11:11:25 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:44.121 11:11:25 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:44.121 11:11:25 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:44.121 11:11:25 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:44.121 11:11:25 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:44.121 11:11:25 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:44.121 11:11:25 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:44.121 11:11:25 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:44.121 11:11:25 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:44.121 11:11:25 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:44.121 00:07:44.121 real 0m0.553s 00:07:44.121 user 0m0.341s 00:07:44.121 sys 0m0.138s 00:07:44.121 11:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.121 ************************************ 00:07:44.121 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.121 END TEST dd_sparse_bdev_to_file 00:07:44.121 ************************************ 00:07:44.121 11:11:25 -- dd/sparse.sh@1 -- # cleanup 00:07:44.121 11:11:25 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:44.121 11:11:25 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:44.121 11:11:25 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:44.121 11:11:25 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:44.121 00:07:44.121 real 0m1.968s 00:07:44.121 user 0m1.150s 00:07:44.121 sys 0m0.568s 00:07:44.121 11:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.379 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.379 ************************************ 00:07:44.379 END TEST spdk_dd_sparse 00:07:44.379 ************************************ 00:07:44.379 11:11:25 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:44.379 11:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.379 11:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.379 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.379 ************************************ 00:07:44.379 START TEST spdk_dd_negative 00:07:44.379 ************************************ 00:07:44.379 11:11:25 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:44.379 * Looking for test storage... 00:07:44.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:44.379 11:11:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.379 11:11:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.379 11:11:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.379 11:11:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.379 11:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.379 11:11:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.379 11:11:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.379 11:11:25 -- paths/export.sh@5 -- # export PATH 00:07:44.379 11:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.379 11:11:25 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.379 11:11:25 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.379 11:11:25 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.379 11:11:25 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.379 11:11:25 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:44.379 11:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.379 
11:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.379 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.379 ************************************ 00:07:44.379 START TEST dd_invalid_arguments 00:07:44.379 ************************************ 00:07:44.379 11:11:25 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:07:44.379 11:11:25 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:44.379 11:11:25 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.379 11:11:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:44.379 11:11:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.379 11:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.379 11:11:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.379 11:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.379 11:11:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.379 11:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.379 11:11:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.379 11:11:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.379 11:11:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:44.379 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:44.379 options: 00:07:44.379 -c, --config JSON config file (default none) 00:07:44.379 --json JSON config file (default none) 00:07:44.379 --json-ignore-init-errors 00:07:44.379 don't exit on invalid config entry 00:07:44.379 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:44.379 -g, --single-file-segments 00:07:44.379 force creating just one hugetlbfs file 00:07:44.379 -h, --help show this usage 00:07:44.379 -i, --shm-id shared memory ID (optional) 00:07:44.379 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:44.379 --lcores lcore to CPU mapping list. The list is in the format: 00:07:44.379 [<,lcores[@CPUs]>...] 00:07:44.379 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:44.379 Within the group, '-' is used for range separator, 00:07:44.379 ',' is used for single number separator. 00:07:44.379 '( )' can be omitted for single element group, 00:07:44.379 '@' can be omitted if cpus and lcores have the same value 00:07:44.379 -n, --mem-channels channel number of memory channels used for DPDK 00:07:44.379 -p, --main-core main (primary) core for DPDK 00:07:44.379 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:44.379 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:44.379 --disable-cpumask-locks Disable CPU core lock files. 
00:07:44.379 --silence-noticelog disable notice level logging to stderr 00:07:44.379 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:44.379 -u, --no-pci disable PCI access 00:07:44.379 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:44.379 --max-delay maximum reactor delay (in microseconds) 00:07:44.379 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:44.379 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:44.379 -R, --huge-unlink unlink huge files after initialization 00:07:44.380 -v, --version print SPDK version 00:07:44.380 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:44.380 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:44.380 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:44.380 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:44.380 Tracepoints vary in size and can use more than one trace entry. 00:07:44.380 --rpcs-allowed comma-separated list of permitted RPCS 00:07:44.380 --env-context Opaque context for use of the env implementation 00:07:44.380 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:44.380 --no-huge run without using hugepages 00:07:44.380 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:44.380 -e, --tpoint-group [:] 00:07:44.380 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:44.380 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:44.380 [2024-10-13 11:11:25.930216] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:44.380 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:44.380 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:44.380 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:44.380 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:44.380 [--------- DD Options ---------] 00:07:44.380 --if Input file. Must specify either --if or --ib. 00:07:44.380 --ib Input bdev. Must specifier either --if or --ib 00:07:44.380 --of Output file. Must specify either --of or --ob. 00:07:44.380 --ob Output bdev. Must specify either --of or --ob. 00:07:44.380 --iflag Input file flags. 00:07:44.380 --oflag Output file flags. 00:07:44.380 --bs I/O unit size (default: 4096) 00:07:44.380 --qd Queue depth (default: 2) 00:07:44.380 --count I/O unit count. 
The number of I/O units to copy. (default: all) 00:07:44.380 --skip Skip this many I/O units at start of input. (default: 0) 00:07:44.380 --seek Skip this many I/O units at start of output. (default: 0) 00:07:44.380 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:44.380 --sparse Enable hole skipping in input target 00:07:44.380 Available iflag and oflag values: 00:07:44.380 append - append mode 00:07:44.380 direct - use direct I/O for data 00:07:44.380 directory - fail unless a directory 00:07:44.380 dsync - use synchronized I/O for data 00:07:44.380 noatime - do not update access time 00:07:44.380 noctty - do not assign controlling terminal from file 00:07:44.380 nofollow - do not follow symlinks 00:07:44.380 nonblock - use non-blocking I/O 00:07:44.380 sync - use synchronized I/O for data and metadata 00:07:44.380 11:11:25 -- common/autotest_common.sh@643 -- # es=2 00:07:44.380 11:11:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.380 11:11:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.380 11:11:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.380 00:07:44.380 real 0m0.074s 00:07:44.380 user 0m0.045s 00:07:44.380 sys 0m0.027s 00:07:44.380 11:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.380 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.380 ************************************ 00:07:44.380 END TEST dd_invalid_arguments 00:07:44.380 ************************************ 00:07:44.638 11:11:25 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:44.638 11:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.638 11:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.638 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.638 ************************************ 00:07:44.638 START TEST dd_double_input 00:07:44.638 ************************************ 00:07:44.638 11:11:26 -- common/autotest_common.sh@1104 -- # double_input 00:07:44.638 11:11:26 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:44.638 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.639 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:44.639 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.639 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:44.639 [2024-10-13 11:11:26.056564] spdk_dd.c:1467:main: *ERROR*: You may specify either 
--if or --ib, but not both. 00:07:44.639 11:11:26 -- common/autotest_common.sh@643 -- # es=22 00:07:44.639 11:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.639 11:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.639 11:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.639 00:07:44.639 real 0m0.074s 00:07:44.639 user 0m0.048s 00:07:44.639 sys 0m0.025s 00:07:44.639 11:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.639 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.639 ************************************ 00:07:44.639 END TEST dd_double_input 00:07:44.639 ************************************ 00:07:44.639 11:11:26 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:44.639 11:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.639 11:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.639 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.639 ************************************ 00:07:44.639 START TEST dd_double_output 00:07:44.639 ************************************ 00:07:44.639 11:11:26 -- common/autotest_common.sh@1104 -- # double_output 00:07:44.639 11:11:26 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:44.639 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.639 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:44.639 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.639 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.639 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:44.639 [2024-10-13 11:11:26.186607] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:07:44.639 11:11:26 -- common/autotest_common.sh@643 -- # es=22 00:07:44.639 11:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.639 11:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.639 11:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.639 00:07:44.639 real 0m0.073s 00:07:44.639 user 0m0.051s 00:07:44.639 sys 0m0.021s 00:07:44.639 11:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.639 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.639 ************************************ 00:07:44.639 END TEST dd_double_output 00:07:44.639 ************************************ 00:07:44.898 11:11:26 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:44.898 11:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.898 11:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.898 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.898 ************************************ 00:07:44.898 START TEST dd_no_input 00:07:44.898 ************************************ 00:07:44.898 11:11:26 -- common/autotest_common.sh@1104 -- # no_input 00:07:44.898 11:11:26 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:44.898 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.898 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:44.898 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.898 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.898 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.898 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.898 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.899 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.899 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:44.899 [2024-10-13 11:11:26.312721] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:44.899 11:11:26 -- common/autotest_common.sh@643 -- # es=22 00:07:44.899 11:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.899 11:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.899 11:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.899 00:07:44.899 real 0m0.074s 00:07:44.899 user 0m0.040s 00:07:44.899 sys 0m0.033s 00:07:44.899 11:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.899 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.899 ************************************ 00:07:44.899 END TEST dd_no_input 00:07:44.899 ************************************ 00:07:44.899 11:11:26 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:44.899 11:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.899 11:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.899 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.899 ************************************ 
00:07:44.899 START TEST dd_no_output 00:07:44.899 ************************************ 00:07:44.899 11:11:26 -- common/autotest_common.sh@1104 -- # no_output 00:07:44.899 11:11:26 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.899 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.899 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.899 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.899 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.899 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.899 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.899 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.899 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.899 [2024-10-13 11:11:26.434781] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:44.899 11:11:26 -- common/autotest_common.sh@643 -- # es=22 00:07:44.899 11:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.899 11:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.899 11:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.899 00:07:44.899 real 0m0.071s 00:07:44.899 user 0m0.047s 00:07:44.899 sys 0m0.023s 00:07:44.899 11:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.899 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.899 ************************************ 00:07:44.899 END TEST dd_no_output 00:07:44.899 ************************************ 00:07:44.899 11:11:26 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:44.899 11:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.899 11:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.899 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.158 ************************************ 00:07:45.158 START TEST dd_wrong_blocksize 00:07:45.158 ************************************ 00:07:45.158 11:11:26 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:07:45.158 11:11:26 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.158 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.158 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.158 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.158 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.158 [2024-10-13 11:11:26.559246] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:45.158 11:11:26 -- common/autotest_common.sh@643 -- # es=22 00:07:45.158 11:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.158 11:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.158 11:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.158 00:07:45.158 real 0m0.069s 00:07:45.158 user 0m0.046s 00:07:45.158 sys 0m0.021s 00:07:45.158 11:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.158 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.158 ************************************ 00:07:45.158 END TEST dd_wrong_blocksize 00:07:45.158 ************************************ 00:07:45.158 11:11:26 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:45.158 11:11:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.158 11:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.158 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.158 ************************************ 00:07:45.158 START TEST dd_smaller_blocksize 00:07:45.158 ************************************ 00:07:45.158 11:11:26 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:07:45.158 11:11:26 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.158 11:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.158 11:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.158 11:11:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 11:11:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:45.158 11:11:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.158 [2024-10-13 11:11:26.683732] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:45.158 [2024-10-13 11:11:26.683818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59498 ] 00:07:45.417 [2024-10-13 11:11:26.822590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.417 [2024-10-13 11:11:26.892949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.676 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:45.676 [2024-10-13 11:11:27.222312] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:45.676 [2024-10-13 11:11:27.222395] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.935 [2024-10-13 11:11:27.287882] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:45.935 11:11:27 -- common/autotest_common.sh@643 -- # es=244 00:07:45.935 11:11:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.935 11:11:27 -- common/autotest_common.sh@652 -- # es=116 00:07:45.935 11:11:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:45.935 11:11:27 -- common/autotest_common.sh@660 -- # es=1 00:07:45.935 11:11:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.935 00:07:45.935 real 0m0.759s 00:07:45.935 user 0m0.338s 00:07:45.935 sys 0m0.315s 00:07:45.935 11:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.935 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.935 ************************************ 00:07:45.935 END TEST dd_smaller_blocksize 00:07:45.935 ************************************ 00:07:45.935 11:11:27 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:45.935 11:11:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.935 11:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.935 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.935 ************************************ 00:07:45.935 START TEST dd_invalid_count 00:07:45.935 ************************************ 00:07:45.935 11:11:27 -- common/autotest_common.sh@1104 -- # invalid_count 00:07:45.935 11:11:27 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.935 11:11:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.935 11:11:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.935 11:11:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.935 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.935 11:11:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.935 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.935 11:11:27 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.935 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.935 11:11:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.935 11:11:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.935 11:11:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.935 [2024-10-13 11:11:27.491157] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:45.935 11:11:27 -- common/autotest_common.sh@643 -- # es=22 00:07:45.935 11:11:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.935 11:11:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.935 11:11:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.935 00:07:45.935 real 0m0.071s 00:07:45.935 user 0m0.042s 00:07:45.935 sys 0m0.028s 00:07:45.935 11:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.935 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.935 ************************************ 00:07:45.935 END TEST dd_invalid_count 00:07:45.935 ************************************ 00:07:46.195 11:11:27 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:46.195 11:11:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.195 11:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.195 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.195 ************************************ 00:07:46.195 START TEST dd_invalid_oflag 00:07:46.195 ************************************ 00:07:46.195 11:11:27 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:07:46.195 11:11:27 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.195 11:11:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.195 [2024-10-13 11:11:27.615756] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:46.195 11:11:27 -- common/autotest_common.sh@643 -- # es=22 00:07:46.195 11:11:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.195 11:11:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.195 
11:11:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.195 00:07:46.195 real 0m0.072s 00:07:46.195 user 0m0.041s 00:07:46.195 sys 0m0.029s 00:07:46.195 11:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.195 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.195 ************************************ 00:07:46.195 END TEST dd_invalid_oflag 00:07:46.195 ************************************ 00:07:46.195 11:11:27 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:46.195 11:11:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.195 11:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.195 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.195 ************************************ 00:07:46.195 START TEST dd_invalid_iflag 00:07:46.195 ************************************ 00:07:46.195 11:11:27 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:07:46.195 11:11:27 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.195 11:11:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.195 11:11:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.195 11:11:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.195 [2024-10-13 11:11:27.737717] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:46.195 11:11:27 -- common/autotest_common.sh@643 -- # es=22 00:07:46.195 11:11:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.195 11:11:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.195 11:11:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.195 00:07:46.195 real 0m0.071s 00:07:46.195 user 0m0.042s 00:07:46.195 sys 0m0.027s 00:07:46.195 11:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.195 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.195 ************************************ 00:07:46.195 END TEST dd_invalid_iflag 00:07:46.195 ************************************ 00:07:46.454 11:11:27 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:46.454 11:11:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.454 11:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.454 11:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.454 ************************************ 00:07:46.454 START TEST dd_unknown_flag 00:07:46.454 ************************************ 00:07:46.454 11:11:27 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:07:46.454 11:11:27 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:46.454 11:11:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.454 11:11:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:46.454 11:11:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.454 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.454 11:11:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.454 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.454 11:11:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.454 11:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.454 11:11:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.454 11:11:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.454 11:11:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:46.455 [2024-10-13 11:11:27.859980] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:46.455 [2024-10-13 11:11:27.860077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59590 ] 00:07:46.455 [2024-10-13 11:11:27.996100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.455 [2024-10-13 11:11:28.043584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.714 [2024-10-13 11:11:28.091382] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:46.714 [2024-10-13 11:11:28.091460] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:46.714 [2024-10-13 11:11:28.091471] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:46.714 [2024-10-13 11:11:28.091481] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.714 [2024-10-13 11:11:28.150549] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:46.714 11:11:28 -- common/autotest_common.sh@643 -- # es=236 00:07:46.714 11:11:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.714 11:11:28 -- common/autotest_common.sh@652 -- # es=108 00:07:46.714 11:11:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:46.714 11:11:28 -- common/autotest_common.sh@660 -- # es=1 00:07:46.714 11:11:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.714 00:07:46.714 real 0m0.442s 00:07:46.714 user 0m0.245s 00:07:46.714 sys 0m0.094s 00:07:46.714 11:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.714 ************************************ 00:07:46.714 END TEST dd_unknown_flag 00:07:46.714 11:11:28 -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.714 ************************************ 00:07:46.714 11:11:28 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:46.714 11:11:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.714 11:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.714 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:46.714 ************************************ 00:07:46.714 START TEST dd_invalid_json 00:07:46.714 ************************************ 00:07:46.714 11:11:28 -- common/autotest_common.sh@1104 -- # invalid_json 00:07:46.714 11:11:28 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.714 11:11:28 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.714 11:11:28 -- dd/negative_dd.sh@95 -- # : 00:07:46.714 11:11:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.714 11:11:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.714 11:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.714 11:11:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.714 11:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.714 11:11:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.714 11:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.714 11:11:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.714 11:11:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.714 11:11:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.973 [2024-10-13 11:11:28.357916] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
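The traces above all exercise spdk_dd's argument validation: each negative test hands the binary a deliberately invalid combination (--ob= with no input, --bs=0, a negative --count, an unknown --oflag, and so on) and only passes when spdk_dd refuses it with the expected error. A minimal sketch of that idiom, assuming the spdk_dd binary and dump-file paths shown in the trace (the expect_failure helper below is illustrative, not a function from the SPDK tree):

#!/usr/bin/env bash
# Minimal sketch of the negative-test idiom traced above; expect_failure is an
# illustrative helper, not part of the SPDK test harness.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

expect_failure() {
    # The wrapped command must exit non-zero for the check to pass.
    if "$@"; then
        echo "FAIL: '$*' unexpectedly succeeded" >&2
        return 1
    fi
    echo "ok: '$*' was rejected"
}

expect_failure "$SPDK_DD" --ob=                                # no --if/--ib given
expect_failure "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --bs=0   # invalid --bs value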
00:07:46.973 [2024-10-13 11:11:28.358016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:07:46.973 [2024-10-13 11:11:28.496134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.973 [2024-10-13 11:11:28.560225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.973 [2024-10-13 11:11:28.560394] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:46.973 [2024-10-13 11:11:28.560415] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.973 [2024-10-13 11:11:28.560469] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:47.232 11:11:28 -- common/autotest_common.sh@643 -- # es=234 00:07:47.232 11:11:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:47.232 11:11:28 -- common/autotest_common.sh@652 -- # es=106 00:07:47.232 11:11:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:47.232 11:11:28 -- common/autotest_common.sh@660 -- # es=1 00:07:47.232 11:11:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:47.232 00:07:47.232 real 0m0.370s 00:07:47.232 user 0m0.202s 00:07:47.232 sys 0m0.066s 00:07:47.232 11:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.232 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.232 ************************************ 00:07:47.232 END TEST dd_invalid_json 00:07:47.232 ************************************ 00:07:47.232 00:07:47.232 real 0m2.943s 00:07:47.232 user 0m1.422s 00:07:47.232 sys 0m1.158s 00:07:47.232 11:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.232 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.232 ************************************ 00:07:47.232 END TEST spdk_dd_negative 00:07:47.232 ************************************ 00:07:47.232 00:07:47.232 real 1m5.457s 00:07:47.232 user 0m40.758s 00:07:47.232 sys 0m15.466s 00:07:47.232 11:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.232 ************************************ 00:07:47.232 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.232 END TEST spdk_dd 00:07:47.232 ************************************ 00:07:47.232 11:11:28 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:47.232 11:11:28 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:47.232 11:11:28 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:47.232 11:11:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:47.232 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.491 11:11:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:47.491 11:11:28 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:47.491 11:11:28 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:47.491 11:11:28 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:47.491 11:11:28 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:47.491 11:11:28 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:47.491 11:11:28 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.491 11:11:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:47.491 11:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.491 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.491 ************************************ 00:07:47.491 START TEST 
nvmf_tcp 00:07:47.491 ************************************ 00:07:47.491 11:11:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.491 * Looking for test storage... 00:07:47.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:47.491 11:11:28 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.491 11:11:28 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:47.491 11:11:28 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.491 11:11:28 -- nvmf/common.sh@7 -- # uname -s 00:07:47.491 11:11:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.491 11:11:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.491 11:11:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.491 11:11:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.491 11:11:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.491 11:11:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.491 11:11:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.491 11:11:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.491 11:11:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.491 11:11:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.491 11:11:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:47.491 11:11:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:47.491 11:11:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.491 11:11:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.491 11:11:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.491 11:11:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.491 11:11:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.491 11:11:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.491 11:11:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.492 11:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:28 -- paths/export.sh@5 -- # export PATH 00:07:47.492 11:11:28 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:28 -- nvmf/common.sh@46 -- # : 0 00:07:47.492 11:11:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.492 11:11:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.492 11:11:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.492 11:11:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.492 11:11:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.492 11:11:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.492 11:11:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.492 11:11:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.492 11:11:28 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:47.492 11:11:28 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:47.492 11:11:28 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:47.492 11:11:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:47.492 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 11:11:28 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:47.492 11:11:28 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.492 11:11:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:47.492 11:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.492 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 ************************************ 00:07:47.492 START TEST nvmf_host_management 00:07:47.492 ************************************ 00:07:47.492 11:11:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.492 * Looking for test storage... 
00:07:47.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.492 11:11:29 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.492 11:11:29 -- nvmf/common.sh@7 -- # uname -s 00:07:47.492 11:11:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.492 11:11:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.492 11:11:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.492 11:11:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.492 11:11:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.492 11:11:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.492 11:11:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.492 11:11:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.492 11:11:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.492 11:11:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:47.492 11:11:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:47.492 11:11:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.492 11:11:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.492 11:11:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.492 11:11:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.492 11:11:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.492 11:11:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.492 11:11:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.492 11:11:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:29 -- 
paths/export.sh@5 -- # export PATH 00:07:47.492 11:11:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.492 11:11:29 -- nvmf/common.sh@46 -- # : 0 00:07:47.492 11:11:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.492 11:11:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.492 11:11:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.492 11:11:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.492 11:11:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.492 11:11:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.492 11:11:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.492 11:11:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.492 11:11:29 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.492 11:11:29 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.492 11:11:29 -- target/host_management.sh@104 -- # nvmftestinit 00:07:47.492 11:11:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:47.492 11:11:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.492 11:11:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:47.492 11:11:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:47.492 11:11:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:47.492 11:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.492 11:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.492 11:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.492 11:11:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:47.492 11:11:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:47.492 11:11:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.492 11:11:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.492 11:11:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.492 11:11:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:47.492 11:11:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.492 11:11:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.492 11:11:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.492 11:11:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.492 11:11:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.492 11:11:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.492 11:11:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.492 11:11:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.492 11:11:29 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:07:47.751 Cannot find device "nvmf_init_br" 00:07:47.751 11:11:29 -- nvmf/common.sh@153 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:47.751 Cannot find device "nvmf_tgt_br" 00:07:47.751 11:11:29 -- nvmf/common.sh@154 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.751 Cannot find device "nvmf_tgt_br2" 00:07:47.751 11:11:29 -- nvmf/common.sh@155 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:47.751 Cannot find device "nvmf_init_br" 00:07:47.751 11:11:29 -- nvmf/common.sh@156 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:47.751 Cannot find device "nvmf_tgt_br" 00:07:47.751 11:11:29 -- nvmf/common.sh@157 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:47.751 Cannot find device "nvmf_tgt_br2" 00:07:47.751 11:11:29 -- nvmf/common.sh@158 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:47.751 Cannot find device "nvmf_br" 00:07:47.751 11:11:29 -- nvmf/common.sh@159 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:47.751 Cannot find device "nvmf_init_if" 00:07:47.751 11:11:29 -- nvmf/common.sh@160 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.751 11:11:29 -- nvmf/common.sh@161 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.751 11:11:29 -- nvmf/common.sh@162 -- # true 00:07:47.751 11:11:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.751 11:11:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.751 11:11:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.751 11:11:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.751 11:11:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.751 11:11:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.751 11:11:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.751 11:11:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.751 11:11:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.751 11:11:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:47.751 11:11:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:47.751 11:11:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:47.751 11:11:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:47.751 11:11:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.751 11:11:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.751 11:11:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.751 11:11:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:48.010 11:11:29 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:07:48.010 11:11:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.010 11:11:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.010 11:11:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:48.010 11:11:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.010 11:11:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.010 11:11:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:48.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:07:48.010 00:07:48.010 --- 10.0.0.2 ping statistics --- 00:07:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.010 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:48.010 11:11:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:48.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:48.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:48.010 00:07:48.010 --- 10.0.0.3 ping statistics --- 00:07:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.010 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:48.010 11:11:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:48.010 00:07:48.010 --- 10.0.0.1 ping statistics --- 00:07:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.010 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:48.010 11:11:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.010 11:11:29 -- nvmf/common.sh@421 -- # return 0 00:07:48.010 11:11:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.010 11:11:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.010 11:11:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.010 11:11:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.010 11:11:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.010 11:11:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.010 11:11:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.010 11:11:29 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:48.010 11:11:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.010 11:11:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.010 11:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.010 ************************************ 00:07:48.010 START TEST nvmf_host_management 00:07:48.010 ************************************ 00:07:48.010 11:11:29 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:07:48.010 11:11:29 -- target/host_management.sh@69 -- # starttarget 00:07:48.010 11:11:29 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:48.010 11:11:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:48.010 11:11:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.010 11:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.010 11:11:29 -- nvmf/common.sh@469 -- # nvmfpid=59874 00:07:48.010 11:11:29 -- nvmf/common.sh@470 -- # waitforlisten 59874 00:07:48.010 11:11:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:48.010 11:11:29 -- common/autotest_common.sh@819 -- # '[' -z 59874 ']' 00:07:48.010 11:11:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.011 11:11:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.011 11:11:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.011 11:11:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.011 11:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.011 [2024-10-13 11:11:29.555307] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:48.011 [2024-10-13 11:11:29.555426] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.270 [2024-10-13 11:11:29.700172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.270 [2024-10-13 11:11:29.773724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.270 [2024-10-13 11:11:29.773899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.270 [2024-10-13 11:11:29.773915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.270 [2024-10-13 11:11:29.773926] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.270 [2024-10-13 11:11:29.774108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.270 [2024-10-13 11:11:29.774591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.270 [2024-10-13 11:11:29.774641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.270 [2024-10-13 11:11:29.774651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.207 11:11:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.207 11:11:30 -- common/autotest_common.sh@852 -- # return 0 00:07:49.207 11:11:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:49.207 11:11:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 11:11:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.207 11:11:30 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.207 11:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 [2024-10-13 11:11:30.633896] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.207 11:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.207 11:11:30 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:49.207 11:11:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 11:11:30 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:49.207 11:11:30 -- target/host_management.sh@23 -- # cat 00:07:49.207 11:11:30 -- 
target/host_management.sh@30 -- # rpc_cmd 00:07:49.207 11:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 Malloc0 00:07:49.207 [2024-10-13 11:11:30.706445] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.207 11:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.207 11:11:30 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:49.207 11:11:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:49.207 11:11:30 -- target/host_management.sh@73 -- # perfpid=59933 00:07:49.207 11:11:30 -- target/host_management.sh@74 -- # waitforlisten 59933 /var/tmp/bdevperf.sock 00:07:49.207 11:11:30 -- common/autotest_common.sh@819 -- # '[' -z 59933 ']' 00:07:49.207 11:11:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.207 11:11:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.207 11:11:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.207 11:11:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.207 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 11:11:30 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:49.207 11:11:30 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:49.207 11:11:30 -- nvmf/common.sh@520 -- # config=() 00:07:49.207 11:11:30 -- nvmf/common.sh@520 -- # local subsystem config 00:07:49.207 11:11:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:49.207 11:11:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:49.207 { 00:07:49.207 "params": { 00:07:49.207 "name": "Nvme$subsystem", 00:07:49.207 "trtype": "$TEST_TRANSPORT", 00:07:49.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.207 "adrfam": "ipv4", 00:07:49.207 "trsvcid": "$NVMF_PORT", 00:07:49.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.207 "hdgst": ${hdgst:-false}, 00:07:49.207 "ddgst": ${ddgst:-false} 00:07:49.207 }, 00:07:49.207 "method": "bdev_nvme_attach_controller" 00:07:49.207 } 00:07:49.207 EOF 00:07:49.207 )") 00:07:49.207 11:11:30 -- nvmf/common.sh@542 -- # cat 00:07:49.207 11:11:30 -- nvmf/common.sh@544 -- # jq . 00:07:49.207 11:11:30 -- nvmf/common.sh@545 -- # IFS=, 00:07:49.207 11:11:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:49.207 "params": { 00:07:49.207 "name": "Nvme0", 00:07:49.207 "trtype": "tcp", 00:07:49.207 "traddr": "10.0.0.2", 00:07:49.207 "adrfam": "ipv4", 00:07:49.207 "trsvcid": "4420", 00:07:49.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.207 "hdgst": false, 00:07:49.207 "ddgst": false 00:07:49.207 }, 00:07:49.207 "method": "bdev_nvme_attach_controller" 00:07:49.207 }' 00:07:49.466 [2024-10-13 11:11:30.810369] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
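Just before this point the trace starts bdevperf against the freshly created TCP listener (10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0), passing it a generated JSON config on /dev/fd/63. A minimal sketch of that invocation with the options written out, assuming a plain config file in place of the helper-generated file descriptor:

#!/usr/bin/env bash
# Minimal sketch of the bdevperf launch captured above. In the real run the
# JSON config comes from the gen_nvmf_target_json helper via /dev/fd/63; the
# file name below is an assumption standing in for it.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONFIG=bdevperf-nvme0.json        # assumed: holds the bdev_nvme_attach_controller stanza for Nvme0

args=(
    -r /var/tmp/bdevperf.sock     # RPC socket later queried with bdev_get_iostat
    --json "$CONFIG"              # attach Nvme0 over TCP to 10.0.0.2:4420 (cnode0)
    -q 64                         # queue depth
    -o 65536                      # 64 KiB I/O size
    -w verify                     # verify workload
    -t 10                         # run time in seconds
)
"$BDEVPERF" "${args[@]}"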
00:07:49.466 [2024-10-13 11:11:30.810465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:07:49.466 [2024-10-13 11:11:30.948893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.466 [2024-10-13 11:11:31.003255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.725 Running I/O for 10 seconds... 00:07:50.293 11:11:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.293 11:11:31 -- common/autotest_common.sh@852 -- # return 0 00:07:50.293 11:11:31 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:50.293 11:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.293 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.293 11:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.293 11:11:31 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.293 11:11:31 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:50.293 11:11:31 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:50.293 11:11:31 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:50.293 11:11:31 -- target/host_management.sh@52 -- # local ret=1 00:07:50.293 11:11:31 -- target/host_management.sh@53 -- # local i 00:07:50.293 11:11:31 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:50.293 11:11:31 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:50.293 11:11:31 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:50.293 11:11:31 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:50.293 11:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.293 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.293 11:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.555 11:11:31 -- target/host_management.sh@55 -- # read_io_count=2040 00:07:50.555 11:11:31 -- target/host_management.sh@58 -- # '[' 2040 -ge 100 ']' 00:07:50.555 11:11:31 -- target/host_management.sh@59 -- # ret=0 00:07:50.555 11:11:31 -- target/host_management.sh@60 -- # break 00:07:50.555 11:11:31 -- target/host_management.sh@64 -- # return 0 00:07:50.555 11:11:31 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:50.555 11:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.555 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.555 [2024-10-13 11:11:31.907815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a52d00 is same with the state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.907867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a52d00 is same with the state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.907879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a52d00 is same with the state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.907887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a52d00 is same with the state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.907896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1a52d00 is same with the state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.908248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a52d00 is same with the
state(5) to be set 00:07:50.555 [2024-10-13 11:11:31.908315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:50.555 [2024-10-13 11:11:31.908610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.555 [2024-10-13 11:11:31.908703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.555 [2024-10-13 11:11:31.908733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 
11:11:31.908837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.908979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.908988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.556 [2024-10-13 11:11:31.909642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.556 [2024-10-13 11:11:31.909652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.557 [2024-10-13 11:11:31.909803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.909880] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a60400 was disconnected and freed. reset controller. 00:07:50.557 [2024-10-13 11:11:31.911181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:50.557 task offset: 17920 on job bdev=Nvme0n1 fails 00:07:50.557 00:07:50.557 Latency(us) 00:07:50.557 [2024-10-13T11:11:32.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.557 [2024-10-13T11:11:32.159Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.557 [2024-10-13T11:11:32.159Z] Job: Nvme0n1 ended in about 0.77 seconds with error 00:07:50.557 Verification LBA range: start 0x0 length 0x400 00:07:50.557 Nvme0n1 : 0.77 2815.96 176.00 83.01 0.00 21754.10 7268.54 28716.68 00:07:50.557 [2024-10-13T11:11:32.159Z] =================================================================================================================== 00:07:50.557 [2024-10-13T11:11:32.159Z] Total : 2815.96 176.00 83.01 0.00 21754.10 7268.54 28716.68 00:07:50.557 [2024-10-13 11:11:31.913426] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.557 [2024-10-13 11:11:31.913587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a86150 (9): Bad file descriptor 00:07:50.557 11:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.557 11:11:31 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:50.557 [2024-10-13 11:11:31.916462] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:50.557 [2024-10-13 11:11:31.916778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:50.557 11:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.557 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.557 [2024-10-13 11:11:31.917109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:50.557 [2024-10-13 11:11:31.917280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:50.557 [2024-10-13 11:11:31.917536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:50.557 [2024-10-13 11:11:31.917700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:50.557 [2024-10-13 11:11:31.917853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a86150 00:07:50.557 [2024-10-13 11:11:31.918022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a86150 (9): Bad file descriptor 00:07:50.557 [2024-10-13 11:11:31.918192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:50.557 [2024-10-13 11:11:31.918426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:50.557 [2024-10-13 11:11:31.918455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:50.557 [2024-10-13 11:11:31.918479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:50.557 11:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.557 11:11:31 -- target/host_management.sh@87 -- # sleep 1 00:07:51.494 11:11:32 -- target/host_management.sh@91 -- # kill -9 59933 00:07:51.494 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (59933) - No such process 00:07:51.494 11:11:32 -- target/host_management.sh@91 -- # true 00:07:51.494 11:11:32 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:51.494 11:11:32 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:51.494 11:11:32 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:51.494 11:11:32 -- nvmf/common.sh@520 -- # config=() 00:07:51.494 11:11:32 -- nvmf/common.sh@520 -- # local subsystem config 00:07:51.494 11:11:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:51.494 11:11:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:51.494 { 00:07:51.494 "params": { 00:07:51.494 "name": "Nvme$subsystem", 00:07:51.494 "trtype": "$TEST_TRANSPORT", 00:07:51.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.494 "adrfam": "ipv4", 00:07:51.494 "trsvcid": "$NVMF_PORT", 00:07:51.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.494 "hdgst": ${hdgst:-false}, 00:07:51.494 "ddgst": ${ddgst:-false} 00:07:51.494 }, 00:07:51.494 "method": "bdev_nvme_attach_controller" 00:07:51.494 } 00:07:51.494 EOF 00:07:51.494 )") 00:07:51.494 11:11:32 -- nvmf/common.sh@542 -- # cat 00:07:51.494 11:11:32 -- nvmf/common.sh@544 -- # jq . 
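The exchange above is the heart of the host-management check: nvmf_subsystem_remove_host drops nqn.2016-06.io.spdk:host0 from cnode0's allow list, the target aborts the queued I/O (the SQ DELETION completions), and every reconnect attempt is refused with "does not allow host" until the script adds the host back. A minimal sketch of that toggle with the same rpc.py used throughout this run; the untagged rpc_cmd calls suggest the target sits on the default /var/tmp/spdk.sock, which is assumed here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop host0 from cnode0: its queue pairs are torn down and in-flight I/O
# completes as ABORTED - SQ DELETION, as logged above.
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# While the host is missing, reconnects fail with "does not allow host".
# Re-adding it lets the controller reset finally succeed.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0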
00:07:51.494 11:11:32 -- nvmf/common.sh@545 -- # IFS=, 00:07:51.494 11:11:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:51.494 "params": { 00:07:51.494 "name": "Nvme0", 00:07:51.494 "trtype": "tcp", 00:07:51.494 "traddr": "10.0.0.2", 00:07:51.494 "adrfam": "ipv4", 00:07:51.494 "trsvcid": "4420", 00:07:51.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.494 "hdgst": false, 00:07:51.494 "ddgst": false 00:07:51.494 }, 00:07:51.494 "method": "bdev_nvme_attach_controller" 00:07:51.494 }' 00:07:51.494 [2024-10-13 11:11:32.982681] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:51.494 [2024-10-13 11:11:32.982811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:07:51.754 [2024-10-13 11:11:33.120849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.754 [2024-10-13 11:11:33.177770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.754 Running I/O for 1 seconds... 00:07:53.133 00:07:53.133 Latency(us) 00:07:53.133 [2024-10-13T11:11:34.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.133 [2024-10-13T11:11:34.735Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.133 Verification LBA range: start 0x0 length 0x400 00:07:53.133 Nvme0n1 : 1.02 3032.36 189.52 0.00 0.00 20773.19 3172.54 27763.43 00:07:53.133 [2024-10-13T11:11:34.735Z] =================================================================================================================== 00:07:53.133 [2024-10-13T11:11:34.735Z] Total : 3032.36 189.52 0.00 0.00 20773.19 3172.54 27763.43 00:07:53.133 11:11:34 -- target/host_management.sh@101 -- # stoptarget 00:07:53.133 11:11:34 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:53.133 11:11:34 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:53.133 11:11:34 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:53.133 11:11:34 -- target/host_management.sh@40 -- # nvmftestfini 00:07:53.133 11:11:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:53.133 11:11:34 -- nvmf/common.sh@116 -- # sync 00:07:53.133 11:11:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:53.133 11:11:34 -- nvmf/common.sh@119 -- # set +e 00:07:53.133 11:11:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:53.133 11:11:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:53.133 rmmod nvme_tcp 00:07:53.133 rmmod nvme_fabrics 00:07:53.133 rmmod nvme_keyring 00:07:53.133 11:11:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:53.133 11:11:34 -- nvmf/common.sh@123 -- # set -e 00:07:53.133 11:11:34 -- nvmf/common.sh@124 -- # return 0 00:07:53.133 11:11:34 -- nvmf/common.sh@477 -- # '[' -n 59874 ']' 00:07:53.133 11:11:34 -- nvmf/common.sh@478 -- # killprocess 59874 00:07:53.133 11:11:34 -- common/autotest_common.sh@926 -- # '[' -z 59874 ']' 00:07:53.133 11:11:34 -- common/autotest_common.sh@930 -- # kill -0 59874 00:07:53.133 11:11:34 -- common/autotest_common.sh@931 -- # uname 00:07:53.133 11:11:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.133 11:11:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59874 00:07:53.133 
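The one-second verify run above is driven entirely by the JSON fragment that gen_nvmf_target_json prints and the script hands to bdevperf over /dev/fd/62. A standalone sketch of the same invocation, assuming the printed bdev_nvme_attach_controller entry belongs inside the usual "subsystems" wrapper, which is not itself visible in the trace:

gen_cfg() {
        # Same attach parameters as printed above; the outer wrapper is an assumption.
        cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}
# 64 outstanding 64 KiB verify I/Os for 1 second, as in the run above; process
# substitution provides the /dev/fd/NN path that --json expects.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_cfg) -q 64 -o 65536 -w verify -t 1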
killing process with pid 59874 00:07:53.133 11:11:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:07:53.133 11:11:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:07:53.133 11:11:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59874' 00:07:53.133 11:11:34 -- common/autotest_common.sh@945 -- # kill 59874 00:07:53.133 11:11:34 -- common/autotest_common.sh@950 -- # wait 59874 00:07:53.392 [2024-10-13 11:11:34.870160] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:53.392 11:11:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:53.392 11:11:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:53.392 11:11:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:53.392 11:11:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.392 11:11:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:53.392 11:11:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.392 11:11:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.392 11:11:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.392 11:11:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:53.392 00:07:53.392 real 0m5.441s 00:07:53.392 user 0m22.994s 00:07:53.392 sys 0m1.219s 00:07:53.392 11:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.392 11:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:53.392 ************************************ 00:07:53.392 END TEST nvmf_host_management 00:07:53.392 ************************************ 00:07:53.392 11:11:34 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:53.392 00:07:53.392 real 0m6.010s 00:07:53.392 user 0m23.128s 00:07:53.392 sys 0m1.449s 00:07:53.392 11:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.392 11:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:53.392 ************************************ 00:07:53.392 END TEST nvmf_host_management 00:07:53.392 ************************************ 00:07:53.651 11:11:35 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.651 11:11:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:53.651 11:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.651 11:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:53.651 ************************************ 00:07:53.651 START TEST nvmf_lvol 00:07:53.651 ************************************ 00:07:53.651 11:11:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.651 * Looking for test storage... 
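killprocess, used above for the nvmf app and again later for the lvol target, is a small guard around kill: it checks that the pid is still alive, looks up the process name, then kills and reaps it. A rough sketch of what its xtrace implies; the real autotest_common.sh helper may differ in details:

killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing left to do
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 or reactor_1 above
        echo "killing process with pid $pid"
        if [[ $name == sudo ]]; then
                sudo kill "$pid"                        # escalate only for sudo-wrapped processes
        else
                kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true                 # reap it when it is our child
}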
00:07:53.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 11:11:35 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.651 11:11:35 -- nvmf/common.sh@7 -- # uname -s 00:07:53.651 11:11:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.651 11:11:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.651 11:11:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.651 11:11:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.651 11:11:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.651 11:11:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.651 11:11:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.651 11:11:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.651 11:11:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.651 11:11:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.651 11:11:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:53.651 11:11:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:07:53.651 11:11:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.651 11:11:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.651 11:11:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.651 11:11:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.651 11:11:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.651 11:11:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.651 11:11:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.651 11:11:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.651 11:11:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.651 11:11:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.651 11:11:35 -- 
paths/export.sh@5 -- # export PATH 00:07:53.651 11:11:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.651 11:11:35 -- nvmf/common.sh@46 -- # : 0 00:07:53.651 11:11:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.651 11:11:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.651 11:11:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.651 11:11:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.651 11:11:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.651 11:11:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:53.651 11:11:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.652 11:11:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.652 11:11:35 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.652 11:11:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:53.652 11:11:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.652 11:11:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:53.652 11:11:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:53.652 11:11:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:53.652 11:11:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.652 11:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.652 11:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.652 11:11:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:53.652 11:11:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:53.652 11:11:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:53.652 11:11:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:53.652 11:11:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:53.652 11:11:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:53.652 11:11:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.652 11:11:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.652 11:11:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.652 11:11:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:53.652 11:11:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.652 11:11:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.652 11:11:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.652 11:11:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.652 11:11:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.652 11:11:35 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.652 11:11:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.652 11:11:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.652 11:11:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:53.652 11:11:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:53.652 Cannot find device "nvmf_tgt_br" 00:07:53.652 11:11:35 -- nvmf/common.sh@154 -- # true 00:07:53.652 11:11:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.652 Cannot find device "nvmf_tgt_br2" 00:07:53.652 11:11:35 -- nvmf/common.sh@155 -- # true 00:07:53.652 11:11:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:53.652 11:11:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:53.652 Cannot find device "nvmf_tgt_br" 00:07:53.652 11:11:35 -- nvmf/common.sh@157 -- # true 00:07:53.652 11:11:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:53.652 Cannot find device "nvmf_tgt_br2" 00:07:53.652 11:11:35 -- nvmf/common.sh@158 -- # true 00:07:53.652 11:11:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:53.911 11:11:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:53.911 11:11:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.911 11:11:35 -- nvmf/common.sh@161 -- # true 00:07:53.911 11:11:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.911 11:11:35 -- nvmf/common.sh@162 -- # true 00:07:53.911 11:11:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.911 11:11:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.911 11:11:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.911 11:11:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.911 11:11:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.911 11:11:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.911 11:11:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.911 11:11:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.911 11:11:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.911 11:11:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:53.911 11:11:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:53.911 11:11:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:53.911 11:11:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:53.911 11:11:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.911 11:11:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.911 11:11:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.911 11:11:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:53.911 11:11:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:53.911 11:11:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.911 11:11:35 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.911 11:11:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.911 11:11:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.911 11:11:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.911 11:11:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:53.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:07:53.911 00:07:53.911 --- 10.0.0.2 ping statistics --- 00:07:53.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.911 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:53.911 11:11:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:53.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:53.911 00:07:53.911 --- 10.0.0.3 ping statistics --- 00:07:53.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.911 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:53.911 11:11:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:53.911 00:07:53.911 --- 10.0.0.1 ping statistics --- 00:07:53.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.911 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:53.911 11:11:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.911 11:11:35 -- nvmf/common.sh@421 -- # return 0 00:07:53.911 11:11:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:53.911 11:11:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.911 11:11:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:53.911 11:11:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:53.911 11:11:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.911 11:11:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:53.911 11:11:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:53.911 11:11:35 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.911 11:11:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:53.911 11:11:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.911 11:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:53.911 11:11:35 -- nvmf/common.sh@469 -- # nvmfpid=60199 00:07:53.911 11:11:35 -- nvmf/common.sh@470 -- # waitforlisten 60199 00:07:53.911 11:11:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.911 11:11:35 -- common/autotest_common.sh@819 -- # '[' -z 60199 ']' 00:07:53.911 11:11:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.911 11:11:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.911 11:11:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
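All of the networking for this target comes from nvmf_veth_init, traced above: a namespace for nvmf_tgt, veth pairs for the initiator and the two target interfaces, a bridge joining the host-side ends, and an iptables rule admitting NVMe/TCP on port 4420. Condensed from those commands, with the nomaster/teardown preamble and error handling omitted:

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and let NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT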
00:07:53.911 11:11:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.911 11:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.169 [2024-10-13 11:11:35.543632] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:54.169 [2024-10-13 11:11:35.543744] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.169 [2024-10-13 11:11:35.679263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.169 [2024-10-13 11:11:35.731976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.169 [2024-10-13 11:11:35.732371] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.169 [2024-10-13 11:11:35.732512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.169 [2024-10-13 11:11:35.732737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.169 [2024-10-13 11:11:35.732962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.169 [2024-10-13 11:11:35.733103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.169 [2024-10-13 11:11:35.733124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.103 11:11:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.103 11:11:36 -- common/autotest_common.sh@852 -- # return 0 00:07:55.104 11:11:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:55.104 11:11:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:55.104 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:55.104 11:11:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.104 11:11:36 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.361 [2024-10-13 11:11:36.841234] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.361 11:11:36 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.619 11:11:37 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:55.619 11:11:37 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.186 11:11:37 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:56.186 11:11:37 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:56.186 11:11:37 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:56.445 11:11:37 -- target/nvmf_lvol.sh@29 -- # lvs=f3a8308e-b641-4421-95f8-915739f67e67 00:07:56.445 11:11:37 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3a8308e-b641-4421-95f8-915739f67e67 lvol 20 00:07:56.703 11:11:38 -- target/nvmf_lvol.sh@32 -- # lvol=0b83c38b-4927-4b84-b3f4-1d9d39a61a91 00:07:56.703 11:11:38 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.961 11:11:38 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 0b83c38b-4927-4b84-b3f4-1d9d39a61a91 00:07:57.219 11:11:38 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.478 [2024-10-13 11:11:38.824786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.478 11:11:38 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.736 11:11:39 -- target/nvmf_lvol.sh@42 -- # perf_pid=60269 00:07:57.736 11:11:39 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:57.736 11:11:39 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:58.671 11:11:40 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0b83c38b-4927-4b84-b3f4-1d9d39a61a91 MY_SNAPSHOT 00:07:58.930 11:11:40 -- target/nvmf_lvol.sh@47 -- # snapshot=0eb31051-2713-435b-a7ad-bf9f875b2fa4 00:07:58.930 11:11:40 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0b83c38b-4927-4b84-b3f4-1d9d39a61a91 30 00:07:59.188 11:11:40 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0eb31051-2713-435b-a7ad-bf9f875b2fa4 MY_CLONE 00:07:59.447 11:11:40 -- target/nvmf_lvol.sh@49 -- # clone=656ac660-5f40-4f0e-a85e-87d2bffdebb9 00:07:59.447 11:11:40 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 656ac660-5f40-4f0e-a85e-87d2bffdebb9 00:07:59.705 11:11:41 -- target/nvmf_lvol.sh@53 -- # wait 60269 00:08:07.839 Initializing NVMe Controllers 00:08:07.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.839 Controller IO queue size 128, less than required. 00:08:07.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.839 Initialization complete. Launching workers. 
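The bdev stack exercised here was assembled by the rpc.py calls traced above: two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported through cnode0, then snapshot, resize to 30 MiB, clone and inflate while spdk_nvme_perf keeps writing. Condensed into one sequence; the UUIDs are whatever this particular run happened to generate:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512                        # -> Malloc0
"$rpc" bdev_malloc_create 64 512                        # -> Malloc1
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)        # f3a8308e-... in this run
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)       # 0b83c38b-... in this run

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf drives random writes, exercise the lvol metadata paths.
snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
"$rpc" bdev_lvol_resize "$lvol" 30
clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)
"$rpc" bdev_lvol_inflate "$clone"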
00:08:07.839 ======================================================== 00:08:07.839 Latency(us) 00:08:07.839 Device Information : IOPS MiB/s Average min max 00:08:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9666.39 37.76 13243.90 1935.80 76308.61 00:08:07.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9563.09 37.36 13391.71 2813.88 64723.65 00:08:07.839 ======================================================== 00:08:07.839 Total : 19229.49 75.12 13317.41 1935.80 76308.61 00:08:07.839 00:08:07.839 11:11:49 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.098 11:11:49 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b83c38b-4927-4b84-b3f4-1d9d39a61a91 00:08:08.356 11:11:49 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3a8308e-b641-4421-95f8-915739f67e67 00:08:08.614 11:11:50 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:08.614 11:11:50 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:08.614 11:11:50 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:08.614 11:11:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:08.614 11:11:50 -- nvmf/common.sh@116 -- # sync 00:08:08.873 11:11:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:08.873 11:11:50 -- nvmf/common.sh@119 -- # set +e 00:08:08.873 11:11:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:08.873 11:11:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:08.873 rmmod nvme_tcp 00:08:08.873 rmmod nvme_fabrics 00:08:08.873 rmmod nvme_keyring 00:08:08.873 11:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:08.873 11:11:50 -- nvmf/common.sh@123 -- # set -e 00:08:08.873 11:11:50 -- nvmf/common.sh@124 -- # return 0 00:08:08.873 11:11:50 -- nvmf/common.sh@477 -- # '[' -n 60199 ']' 00:08:08.873 11:11:50 -- nvmf/common.sh@478 -- # killprocess 60199 00:08:08.873 11:11:50 -- common/autotest_common.sh@926 -- # '[' -z 60199 ']' 00:08:08.873 11:11:50 -- common/autotest_common.sh@930 -- # kill -0 60199 00:08:08.873 11:11:50 -- common/autotest_common.sh@931 -- # uname 00:08:08.873 11:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:08.873 11:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60199 00:08:08.873 11:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:08.873 11:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:08.873 killing process with pid 60199 00:08:08.873 11:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60199' 00:08:08.873 11:11:50 -- common/autotest_common.sh@945 -- # kill 60199 00:08:08.873 11:11:50 -- common/autotest_common.sh@950 -- # wait 60199 00:08:09.132 11:11:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:09.132 11:11:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:09.132 11:11:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:09.132 11:11:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.132 11:11:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:09.132 11:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.132 11:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.132 11:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.132 11:11:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
00:08:09.132 00:08:09.132 real 0m15.586s 00:08:09.132 user 1m4.426s 00:08:09.132 sys 0m4.792s 00:08:09.132 11:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.132 ************************************ 00:08:09.132 END TEST nvmf_lvol 00:08:09.132 ************************************ 00:08:09.132 11:11:50 -- common/autotest_common.sh@10 -- # set +x 00:08:09.132 11:11:50 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.132 11:11:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.132 11:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.132 11:11:50 -- common/autotest_common.sh@10 -- # set +x 00:08:09.132 ************************************ 00:08:09.132 START TEST nvmf_lvs_grow 00:08:09.132 ************************************ 00:08:09.132 11:11:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.391 * Looking for test storage... 00:08:09.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.391 11:11:50 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.391 11:11:50 -- nvmf/common.sh@7 -- # uname -s 00:08:09.391 11:11:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.391 11:11:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.391 11:11:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.391 11:11:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.391 11:11:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.391 11:11:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.391 11:11:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.391 11:11:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.391 11:11:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.391 11:11:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.391 11:11:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:09.391 11:11:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:09.391 11:11:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.391 11:11:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.391 11:11:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.391 11:11:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.391 11:11:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.391 11:11:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.391 11:11:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.391 11:11:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.391 11:11:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.391 11:11:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.391 11:11:50 -- paths/export.sh@5 -- # export PATH 00:08:09.391 11:11:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.391 11:11:50 -- nvmf/common.sh@46 -- # : 0 00:08:09.392 11:11:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.392 11:11:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.392 11:11:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.392 11:11:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.392 11:11:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.392 11:11:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.392 11:11:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.392 11:11:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.392 11:11:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.392 11:11:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:09.392 11:11:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:09.392 11:11:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.392 11:11:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.392 11:11:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.392 11:11:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.392 11:11:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.392 11:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.392 11:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.392 11:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.392 11:11:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:09.392 11:11:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:09.392 11:11:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:09.392 11:11:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:09.392 11:11:50 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:09.392 11:11:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:09.392 11:11:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.392 11:11:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.392 11:11:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.392 11:11:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:09.392 11:11:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.392 11:11:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.392 11:11:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.392 11:11:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.392 11:11:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.392 11:11:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.392 11:11:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.392 11:11:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.392 11:11:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:09.392 11:11:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:09.392 Cannot find device "nvmf_tgt_br" 00:08:09.392 11:11:50 -- nvmf/common.sh@154 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.392 Cannot find device "nvmf_tgt_br2" 00:08:09.392 11:11:50 -- nvmf/common.sh@155 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:09.392 11:11:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:09.392 Cannot find device "nvmf_tgt_br" 00:08:09.392 11:11:50 -- nvmf/common.sh@157 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:09.392 Cannot find device "nvmf_tgt_br2" 00:08:09.392 11:11:50 -- nvmf/common.sh@158 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:09.392 11:11:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:09.392 11:11:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.392 11:11:50 -- nvmf/common.sh@161 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.392 11:11:50 -- nvmf/common.sh@162 -- # true 00:08:09.392 11:11:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.392 11:11:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.392 11:11:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.392 11:11:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.392 11:11:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.392 11:11:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.392 11:11:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.392 11:11:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.392 11:11:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.651 11:11:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:09.651 11:11:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:09.651 11:11:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:09.651 11:11:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:09.651 11:11:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.651 11:11:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.651 11:11:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.651 11:11:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:09.651 11:11:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:09.651 11:11:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.651 11:11:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.651 11:11:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.651 11:11:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.651 11:11:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.651 11:11:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:09.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:09.651 00:08:09.651 --- 10.0.0.2 ping statistics --- 00:08:09.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.651 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:09.651 11:11:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:09.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:09.651 00:08:09.651 --- 10.0.0.3 ping statistics --- 00:08:09.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.651 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:09.651 11:11:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:09.651 00:08:09.651 --- 10.0.0.1 ping statistics --- 00:08:09.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.651 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:09.651 11:11:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.651 11:11:51 -- nvmf/common.sh@421 -- # return 0 00:08:09.651 11:11:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:09.651 11:11:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.651 11:11:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:09.651 11:11:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:09.651 11:11:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.651 11:11:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:09.651 11:11:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:09.651 11:11:51 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:09.651 11:11:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:09.651 11:11:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:09.651 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 11:11:51 -- nvmf/common.sh@469 -- # nvmfpid=60595 00:08:09.651 11:11:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.651 11:11:51 -- nvmf/common.sh@470 -- # waitforlisten 60595 00:08:09.651 11:11:51 -- common/autotest_common.sh@819 -- # '[' -z 60595 ']' 00:08:09.651 11:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.651 11:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:09.651 11:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.651 11:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:09.651 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 [2024-10-13 11:11:51.165859] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:09.651 [2024-10-13 11:11:51.165963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.910 [2024-10-13 11:11:51.300544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.910 [2024-10-13 11:11:51.372281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.910 [2024-10-13 11:11:51.372511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.910 [2024-10-13 11:11:51.372527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.910 [2024-10-13 11:11:51.372538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:09.910 [2024-10-13 11:11:51.372573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.847 11:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:10.847 11:11:52 -- common/autotest_common.sh@852 -- # return 0 00:08:10.847 11:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:10.847 11:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:10.847 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:08:10.847 11:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.847 11:11:52 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.106 [2024-10-13 11:11:52.539892] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:11.106 11:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.106 11:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.106 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:08:11.106 ************************************ 00:08:11.106 START TEST lvs_grow_clean 00:08:11.106 ************************************ 00:08:11.106 11:11:52 -- common/autotest_common.sh@1104 -- # lvs_grow 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.106 11:11:52 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.403 11:11:52 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.403 11:11:52 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.685 11:11:53 -- target/nvmf_lvs_grow.sh@28 -- # lvs=854b1960-b64b-4131-9eac-204b436508a2 00:08:11.685 11:11:53 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:11.685 11:11:53 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.944 11:11:53 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.944 11:11:53 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.944 11:11:53 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 854b1960-b64b-4131-9eac-204b436508a2 lvol 150 00:08:12.203 11:11:53 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e78ef2c-4d79-4243-b869-4a8dd5abe855 00:08:12.203 11:11:53 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.203 11:11:53 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.462 [2024-10-13 11:11:53.907275] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.462 [2024-10-13 11:11:53.907450] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.462 true 00:08:12.462 11:11:53 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:12.462 11:11:53 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.720 11:11:54 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.720 11:11:54 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.979 11:11:54 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e78ef2c-4d79-4243-b869-4a8dd5abe855 00:08:13.237 11:11:54 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:13.237 [2024-10-13 11:11:54.831857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.497 11:11:54 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.497 11:11:55 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60682 00:08:13.497 11:11:55 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:13.497 11:11:55 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.497 11:11:55 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60682 /var/tmp/bdevperf.sock 00:08:13.497 11:11:55 -- common/autotest_common.sh@819 -- # '[' -z 60682 ']' 00:08:13.497 11:11:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.497 11:11:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.497 11:11:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.497 11:11:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.497 11:11:55 -- common/autotest_common.sh@10 -- # set +x 00:08:13.756 [2024-10-13 11:11:55.109308] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:13.756 [2024-10-13 11:11:55.109453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60682 ] 00:08:13.756 [2024-10-13 11:11:55.245910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.756 [2024-10-13 11:11:55.314195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.693 11:11:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.693 11:11:56 -- common/autotest_common.sh@852 -- # return 0 00:08:14.693 11:11:56 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.952 Nvme0n1 00:08:14.952 11:11:56 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.212 [ 00:08:15.212 { 00:08:15.212 "name": "Nvme0n1", 00:08:15.212 "aliases": [ 00:08:15.212 "1e78ef2c-4d79-4243-b869-4a8dd5abe855" 00:08:15.212 ], 00:08:15.212 "product_name": "NVMe disk", 00:08:15.212 "block_size": 4096, 00:08:15.212 "num_blocks": 38912, 00:08:15.212 "uuid": "1e78ef2c-4d79-4243-b869-4a8dd5abe855", 00:08:15.212 "assigned_rate_limits": { 00:08:15.212 "rw_ios_per_sec": 0, 00:08:15.212 "rw_mbytes_per_sec": 0, 00:08:15.212 "r_mbytes_per_sec": 0, 00:08:15.212 "w_mbytes_per_sec": 0 00:08:15.212 }, 00:08:15.212 "claimed": false, 00:08:15.212 "zoned": false, 00:08:15.212 "supported_io_types": { 00:08:15.212 "read": true, 00:08:15.212 "write": true, 00:08:15.212 "unmap": true, 00:08:15.212 "write_zeroes": true, 00:08:15.212 "flush": true, 00:08:15.212 "reset": true, 00:08:15.212 "compare": true, 00:08:15.212 "compare_and_write": true, 00:08:15.212 "abort": true, 00:08:15.212 "nvme_admin": true, 00:08:15.212 "nvme_io": true 00:08:15.212 }, 00:08:15.212 "driver_specific": { 00:08:15.212 "nvme": [ 00:08:15.212 { 00:08:15.212 "trid": { 00:08:15.212 "trtype": "TCP", 00:08:15.212 "adrfam": "IPv4", 00:08:15.212 "traddr": "10.0.0.2", 00:08:15.212 "trsvcid": "4420", 00:08:15.212 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:15.212 }, 00:08:15.212 "ctrlr_data": { 00:08:15.212 "cntlid": 1, 00:08:15.212 "vendor_id": "0x8086", 00:08:15.212 "model_number": "SPDK bdev Controller", 00:08:15.212 "serial_number": "SPDK0", 00:08:15.212 "firmware_revision": "24.01.1", 00:08:15.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.212 "oacs": { 00:08:15.212 "security": 0, 00:08:15.212 "format": 0, 00:08:15.212 "firmware": 0, 00:08:15.212 "ns_manage": 0 00:08:15.212 }, 00:08:15.212 "multi_ctrlr": true, 00:08:15.212 "ana_reporting": false 00:08:15.212 }, 00:08:15.212 "vs": { 00:08:15.212 "nvme_version": "1.3" 00:08:15.212 }, 00:08:15.212 "ns_data": { 00:08:15.212 "id": 1, 00:08:15.212 "can_share": true 00:08:15.212 } 00:08:15.212 } 00:08:15.212 ], 00:08:15.212 "mp_policy": "active_passive" 00:08:15.212 } 00:08:15.212 } 00:08:15.212 ] 00:08:15.212 11:11:56 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60701 00:08:15.212 11:11:56 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:15.212 11:11:56 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.212 Running I/O for 10 seconds... 
00:08:16.151 Latency(us) 00:08:16.151 [2024-10-13T11:11:57.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.151 [2024-10-13T11:11:57.753Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.151 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:16.151 [2024-10-13T11:11:57.753Z] =================================================================================================================== 00:08:16.151 [2024-10-13T11:11:57.753Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:16.151 00:08:17.089 11:11:58 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:17.349 [2024-10-13T11:11:58.951Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.349 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:17.349 [2024-10-13T11:11:58.951Z] =================================================================================================================== 00:08:17.349 [2024-10-13T11:11:58.951Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:17.349 00:08:17.349 true 00:08:17.349 11:11:58 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:17.349 11:11:58 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:17.918 11:11:59 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:17.918 11:11:59 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:17.918 11:11:59 -- target/nvmf_lvs_grow.sh@65 -- # wait 60701 00:08:18.178 [2024-10-13T11:11:59.780Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.178 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:18.178 [2024-10-13T11:11:59.780Z] =================================================================================================================== 00:08:18.178 [2024-10-13T11:11:59.780Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:18.178 00:08:19.115 [2024-10-13T11:12:00.717Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.115 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:19.115 [2024-10-13T11:12:00.717Z] =================================================================================================================== 00:08:19.115 [2024-10-13T11:12:00.717Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:19.115 00:08:20.527 [2024-10-13T11:12:02.129Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.527 Nvme0n1 : 5.00 6393.20 24.97 0.00 0.00 0.00 0.00 0.00 00:08:20.527 [2024-10-13T11:12:02.129Z] =================================================================================================================== 00:08:20.527 [2024-10-13T11:12:02.129Z] Total : 6393.20 24.97 0.00 0.00 0.00 0.00 0.00 00:08:20.527 00:08:21.464 [2024-10-13T11:12:03.066Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.464 Nvme0n1 : 6.00 6364.83 24.86 0.00 0.00 0.00 0.00 0.00 00:08:21.464 [2024-10-13T11:12:03.066Z] =================================================================================================================== 00:08:21.464 [2024-10-13T11:12:03.066Z] Total : 6364.83 24.86 0.00 0.00 0.00 0.00 0.00 00:08:21.464 00:08:22.402 [2024-10-13T11:12:04.004Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:22.402 Nvme0n1 : 7.00 6380.86 24.93 0.00 0.00 0.00 0.00 0.00 00:08:22.402 [2024-10-13T11:12:04.004Z] =================================================================================================================== 00:08:22.402 [2024-10-13T11:12:04.004Z] Total : 6380.86 24.93 0.00 0.00 0.00 0.00 0.00 00:08:22.402 00:08:23.340 [2024-10-13T11:12:04.942Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.340 Nvme0n1 : 8.00 6361.12 24.85 0.00 0.00 0.00 0.00 0.00 00:08:23.340 [2024-10-13T11:12:04.942Z] =================================================================================================================== 00:08:23.340 [2024-10-13T11:12:04.942Z] Total : 6361.12 24.85 0.00 0.00 0.00 0.00 0.00 00:08:23.340 00:08:24.279 [2024-10-13T11:12:05.881Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.279 Nvme0n1 : 9.00 6345.78 24.79 0.00 0.00 0.00 0.00 0.00 00:08:24.279 [2024-10-13T11:12:05.881Z] =================================================================================================================== 00:08:24.279 [2024-10-13T11:12:05.881Z] Total : 6345.78 24.79 0.00 0.00 0.00 0.00 0.00 00:08:24.279 00:08:25.218 [2024-10-13T11:12:06.820Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.218 Nvme0n1 : 10.00 6320.80 24.69 0.00 0.00 0.00 0.00 0.00 00:08:25.218 [2024-10-13T11:12:06.820Z] =================================================================================================================== 00:08:25.218 [2024-10-13T11:12:06.820Z] Total : 6320.80 24.69 0.00 0.00 0.00 0.00 0.00 00:08:25.218 00:08:25.218 00:08:25.218 Latency(us) 00:08:25.218 [2024-10-13T11:12:06.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.218 [2024-10-13T11:12:06.820Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.218 Nvme0n1 : 10.02 6322.26 24.70 0.00 0.00 20240.39 5630.14 69587.32 00:08:25.218 [2024-10-13T11:12:06.820Z] =================================================================================================================== 00:08:25.218 [2024-10-13T11:12:06.820Z] Total : 6322.26 24.70 0.00 0.00 20240.39 5630.14 69587.32 00:08:25.218 0 00:08:25.218 11:12:06 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60682 00:08:25.218 11:12:06 -- common/autotest_common.sh@926 -- # '[' -z 60682 ']' 00:08:25.218 11:12:06 -- common/autotest_common.sh@930 -- # kill -0 60682 00:08:25.218 11:12:06 -- common/autotest_common.sh@931 -- # uname 00:08:25.218 11:12:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:25.218 11:12:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60682 00:08:25.218 11:12:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:08:25.218 11:12:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:08:25.218 11:12:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60682' 00:08:25.218 killing process with pid 60682 00:08:25.218 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.218 00:08:25.218 Latency(us) 00:08:25.218 [2024-10-13T11:12:06.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.218 [2024-10-13T11:12:06.820Z] =================================================================================================================== 00:08:25.218 [2024-10-13T11:12:06.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.218 11:12:06 -- common/autotest_common.sh@945 
-- # kill 60682 00:08:25.218 11:12:06 -- common/autotest_common.sh@950 -- # wait 60682 00:08:25.477 11:12:06 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:25.737 11:12:07 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:25.737 11:12:07 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:25.996 11:12:07 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:25.996 11:12:07 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:25.996 11:12:07 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.256 [2024-10-13 11:12:07.719883] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.256 11:12:07 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:26.256 11:12:07 -- common/autotest_common.sh@640 -- # local es=0 00:08:26.256 11:12:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:26.256 11:12:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.256 11:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:26.256 11:12:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.256 11:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:26.256 11:12:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.256 11:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:26.256 11:12:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.256 11:12:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:26.256 11:12:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:26.532 request: 00:08:26.532 { 00:08:26.532 "uuid": "854b1960-b64b-4131-9eac-204b436508a2", 00:08:26.532 "method": "bdev_lvol_get_lvstores", 00:08:26.532 "req_id": 1 00:08:26.532 } 00:08:26.532 Got JSON-RPC error response 00:08:26.532 response: 00:08:26.532 { 00:08:26.532 "code": -19, 00:08:26.532 "message": "No such device" 00:08:26.532 } 00:08:26.532 11:12:08 -- common/autotest_common.sh@643 -- # es=1 00:08:26.532 11:12:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:26.532 11:12:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:26.532 11:12:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:26.532 11:12:08 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.790 aio_bdev 00:08:26.790 11:12:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1e78ef2c-4d79-4243-b869-4a8dd5abe855 00:08:26.790 11:12:08 -- common/autotest_common.sh@887 -- # local bdev_name=1e78ef2c-4d79-4243-b869-4a8dd5abe855 00:08:26.790 11:12:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:26.790 11:12:08 -- common/autotest_common.sh@889 -- # local i 00:08:26.790 11:12:08 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:26.790 11:12:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:26.791 11:12:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.050 11:12:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e78ef2c-4d79-4243-b869-4a8dd5abe855 -t 2000 00:08:27.310 [ 00:08:27.310 { 00:08:27.310 "name": "1e78ef2c-4d79-4243-b869-4a8dd5abe855", 00:08:27.310 "aliases": [ 00:08:27.310 "lvs/lvol" 00:08:27.310 ], 00:08:27.310 "product_name": "Logical Volume", 00:08:27.310 "block_size": 4096, 00:08:27.310 "num_blocks": 38912, 00:08:27.310 "uuid": "1e78ef2c-4d79-4243-b869-4a8dd5abe855", 00:08:27.310 "assigned_rate_limits": { 00:08:27.310 "rw_ios_per_sec": 0, 00:08:27.310 "rw_mbytes_per_sec": 0, 00:08:27.310 "r_mbytes_per_sec": 0, 00:08:27.310 "w_mbytes_per_sec": 0 00:08:27.310 }, 00:08:27.310 "claimed": false, 00:08:27.310 "zoned": false, 00:08:27.310 "supported_io_types": { 00:08:27.310 "read": true, 00:08:27.310 "write": true, 00:08:27.310 "unmap": true, 00:08:27.310 "write_zeroes": true, 00:08:27.310 "flush": false, 00:08:27.310 "reset": true, 00:08:27.310 "compare": false, 00:08:27.310 "compare_and_write": false, 00:08:27.310 "abort": false, 00:08:27.310 "nvme_admin": false, 00:08:27.310 "nvme_io": false 00:08:27.310 }, 00:08:27.310 "driver_specific": { 00:08:27.310 "lvol": { 00:08:27.310 "lvol_store_uuid": "854b1960-b64b-4131-9eac-204b436508a2", 00:08:27.310 "base_bdev": "aio_bdev", 00:08:27.310 "thin_provision": false, 00:08:27.310 "snapshot": false, 00:08:27.310 "clone": false, 00:08:27.310 "esnap_clone": false 00:08:27.310 } 00:08:27.310 } 00:08:27.310 } 00:08:27.310 ] 00:08:27.310 11:12:08 -- common/autotest_common.sh@895 -- # return 0 00:08:27.310 11:12:08 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:27.310 11:12:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:27.629 11:12:08 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:27.629 11:12:08 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:27.629 11:12:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:27.908 11:12:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:27.908 11:12:09 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1e78ef2c-4d79-4243-b869-4a8dd5abe855 00:08:27.908 11:12:09 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 854b1960-b64b-4131-9eac-204b436508a2 00:08:28.167 11:12:09 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.426 11:12:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.685 ************************************ 00:08:28.686 END TEST lvs_grow_clean 00:08:28.686 ************************************ 00:08:28.686 00:08:28.686 real 0m17.686s 00:08:28.686 user 0m16.925s 00:08:28.686 sys 0m2.227s 00:08:28.686 11:12:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.686 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:28.945 11:12:10 
-- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:28.945 11:12:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.945 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.945 ************************************ 00:08:28.945 START TEST lvs_grow_dirty 00:08:28.945 ************************************ 00:08:28.945 11:12:10 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.945 11:12:10 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.205 11:12:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.205 11:12:10 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.464 11:12:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:29.464 11:12:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:29.464 11:12:10 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:29.723 11:12:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:29.723 11:12:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:29.723 11:12:11 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 lvol 150 00:08:29.982 11:12:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4c47c265-08be-4f87-a624-2b6338737ccf 00:08:29.982 11:12:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.982 11:12:11 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.240 [2024-10-13 11:12:11.746220] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.240 [2024-10-13 11:12:11.746628] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.240 true 00:08:30.240 11:12:11 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:30.241 11:12:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.499 11:12:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.499 11:12:11 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.759 11:12:12 -- 
target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c47c265-08be-4f87-a624-2b6338737ccf 00:08:31.018 11:12:12 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.278 11:12:12 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.537 11:12:12 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.537 11:12:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60941 00:08:31.537 11:12:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.537 11:12:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60941 /var/tmp/bdevperf.sock 00:08:31.537 11:12:12 -- common/autotest_common.sh@819 -- # '[' -z 60941 ']' 00:08:31.537 11:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.537 11:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.538 11:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.538 11:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.538 11:12:12 -- common/autotest_common.sh@10 -- # set +x 00:08:31.538 [2024-10-13 11:12:13.030181] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:31.538 [2024-10-13 11:12:13.030520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60941 ] 00:08:31.797 [2024-10-13 11:12:13.168009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.797 [2024-10-13 11:12:13.219811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.735 11:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.735 11:12:13 -- common/autotest_common.sh@852 -- # return 0 00:08:32.735 11:12:13 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.735 Nvme0n1 00:08:32.735 11:12:14 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.995 [ 00:08:32.995 { 00:08:32.995 "name": "Nvme0n1", 00:08:32.995 "aliases": [ 00:08:32.995 "4c47c265-08be-4f87-a624-2b6338737ccf" 00:08:32.995 ], 00:08:32.995 "product_name": "NVMe disk", 00:08:32.995 "block_size": 4096, 00:08:32.995 "num_blocks": 38912, 00:08:32.995 "uuid": "4c47c265-08be-4f87-a624-2b6338737ccf", 00:08:32.995 "assigned_rate_limits": { 00:08:32.995 "rw_ios_per_sec": 0, 00:08:32.995 "rw_mbytes_per_sec": 0, 00:08:32.995 "r_mbytes_per_sec": 0, 00:08:32.995 "w_mbytes_per_sec": 0 00:08:32.995 }, 00:08:32.995 "claimed": false, 00:08:32.995 "zoned": false, 00:08:32.995 "supported_io_types": { 00:08:32.995 "read": true, 00:08:32.995 "write": true, 00:08:32.995 "unmap": true, 00:08:32.995 "write_zeroes": true, 00:08:32.995 "flush": true, 00:08:32.995 "reset": true, 00:08:32.995 "compare": true, 00:08:32.995 "compare_and_write": true, 00:08:32.995 "abort": true, 00:08:32.995 "nvme_admin": true, 00:08:32.995 "nvme_io": true 00:08:32.995 }, 00:08:32.995 "driver_specific": { 00:08:32.995 "nvme": [ 00:08:32.995 { 00:08:32.995 "trid": { 00:08:32.995 "trtype": "TCP", 00:08:32.995 "adrfam": "IPv4", 00:08:32.995 "traddr": "10.0.0.2", 00:08:32.995 "trsvcid": "4420", 00:08:32.995 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.995 }, 00:08:32.995 "ctrlr_data": { 00:08:32.995 "cntlid": 1, 00:08:32.995 "vendor_id": "0x8086", 00:08:32.995 "model_number": "SPDK bdev Controller", 00:08:32.995 "serial_number": "SPDK0", 00:08:32.995 "firmware_revision": "24.01.1", 00:08:32.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.995 "oacs": { 00:08:32.995 "security": 0, 00:08:32.995 "format": 0, 00:08:32.995 "firmware": 0, 00:08:32.995 "ns_manage": 0 00:08:32.995 }, 00:08:32.995 "multi_ctrlr": true, 00:08:32.995 "ana_reporting": false 00:08:32.995 }, 00:08:32.995 "vs": { 00:08:32.995 "nvme_version": "1.3" 00:08:32.995 }, 00:08:32.995 "ns_data": { 00:08:32.995 "id": 1, 00:08:32.995 "can_share": true 00:08:32.995 } 00:08:32.995 } 00:08:32.995 ], 00:08:32.995 "mp_policy": "active_passive" 00:08:32.995 } 00:08:32.995 } 00:08:32.995 ] 00:08:32.995 11:12:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60970 00:08:32.995 11:12:14 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.995 11:12:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.254 Running I/O for 10 seconds... 
00:08:34.192 Latency(us) 00:08:34.192 [2024-10-13T11:12:15.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.192 [2024-10-13T11:12:15.794Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.192 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:34.192 [2024-10-13T11:12:15.794Z] =================================================================================================================== 00:08:34.192 [2024-10-13T11:12:15.794Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:34.192 00:08:35.128 11:12:16 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:35.128 [2024-10-13T11:12:16.730Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.128 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:35.128 [2024-10-13T11:12:16.730Z] =================================================================================================================== 00:08:35.128 [2024-10-13T11:12:16.730Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:35.128 00:08:35.386 true 00:08:35.386 11:12:16 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:35.387 11:12:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.645 11:12:17 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.645 11:12:17 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.645 11:12:17 -- target/nvmf_lvs_grow.sh@65 -- # wait 60970 00:08:36.214 [2024-10-13T11:12:17.816Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.214 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:36.214 [2024-10-13T11:12:17.816Z] =================================================================================================================== 00:08:36.214 [2024-10-13T11:12:17.816Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:36.214 00:08:37.150 [2024-10-13T11:12:18.752Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.150 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:37.150 [2024-10-13T11:12:18.752Z] =================================================================================================================== 00:08:37.150 [2024-10-13T11:12:18.752Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:37.150 00:08:38.087 [2024-10-13T11:12:19.689Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.087 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:38.087 [2024-10-13T11:12:19.689Z] =================================================================================================================== 00:08:38.087 [2024-10-13T11:12:19.689Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:38.087 00:08:39.464 [2024-10-13T11:12:21.066Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.464 Nvme0n1 : 6.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:39.464 [2024-10-13T11:12:21.066Z] =================================================================================================================== 00:08:39.464 [2024-10-13T11:12:21.066Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:39.464 00:08:40.399 [2024-10-13T11:12:22.001Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:40.399 Nvme0n1 : 7.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:40.399 [2024-10-13T11:12:22.001Z] =================================================================================================================== 00:08:40.399 [2024-10-13T11:12:22.001Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:40.399 00:08:41.335 [2024-10-13T11:12:22.937Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.335 Nvme0n1 : 8.00 6374.00 24.90 0.00 0.00 0.00 0.00 0.00 00:08:41.335 [2024-10-13T11:12:22.937Z] =================================================================================================================== 00:08:41.335 [2024-10-13T11:12:22.937Z] Total : 6374.00 24.90 0.00 0.00 0.00 0.00 0.00 00:08:41.335 00:08:42.270 [2024-10-13T11:12:23.872Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.270 Nvme0n1 : 9.00 6357.22 24.83 0.00 0.00 0.00 0.00 0.00 00:08:42.270 [2024-10-13T11:12:23.872Z] =================================================================================================================== 00:08:42.270 [2024-10-13T11:12:23.872Z] Total : 6357.22 24.83 0.00 0.00 0.00 0.00 0.00 00:08:42.270 00:08:43.231 [2024-10-13T11:12:24.833Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.231 Nvme0n1 : 10.00 6343.80 24.78 0.00 0.00 0.00 0.00 0.00 00:08:43.231 [2024-10-13T11:12:24.834Z] =================================================================================================================== 00:08:43.232 [2024-10-13T11:12:24.834Z] Total : 6343.80 24.78 0.00 0.00 0.00 0.00 0.00 00:08:43.232 00:08:43.232 00:08:43.232 Latency(us) 00:08:43.232 [2024-10-13T11:12:24.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.232 [2024-10-13T11:12:24.834Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.232 Nvme0n1 : 10.01 6350.74 24.81 0.00 0.00 20151.45 13047.62 170631.91 00:08:43.232 [2024-10-13T11:12:24.834Z] =================================================================================================================== 00:08:43.232 [2024-10-13T11:12:24.834Z] Total : 6350.74 24.81 0.00 0.00 20151.45 13047.62 170631.91 00:08:43.232 0 00:08:43.232 11:12:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60941 00:08:43.232 11:12:24 -- common/autotest_common.sh@926 -- # '[' -z 60941 ']' 00:08:43.232 11:12:24 -- common/autotest_common.sh@930 -- # kill -0 60941 00:08:43.232 11:12:24 -- common/autotest_common.sh@931 -- # uname 00:08:43.232 11:12:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:43.232 11:12:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60941 00:08:43.232 killing process with pid 60941 00:08:43.232 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.232 00:08:43.232 Latency(us) 00:08:43.232 [2024-10-13T11:12:24.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.232 [2024-10-13T11:12:24.834Z] =================================================================================================================== 00:08:43.232 [2024-10-13T11:12:24.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.232 11:12:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:08:43.232 11:12:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:08:43.232 11:12:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60941' 00:08:43.232 11:12:24 -- 
common/autotest_common.sh@945 -- # kill 60941 00:08:43.232 11:12:24 -- common/autotest_common.sh@950 -- # wait 60941 00:08:43.490 11:12:24 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.749 11:12:25 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:43.749 11:12:25 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60595 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@74 -- # wait 60595 00:08:44.008 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60595 Killed "${NVMF_APP[@]}" "$@" 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:44.008 11:12:25 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:44.008 11:12:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:44.008 11:12:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:44.008 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:08:44.008 11:12:25 -- nvmf/common.sh@469 -- # nvmfpid=61102 00:08:44.008 11:12:25 -- nvmf/common.sh@470 -- # waitforlisten 61102 00:08:44.008 11:12:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.008 11:12:25 -- common/autotest_common.sh@819 -- # '[' -z 61102 ']' 00:08:44.008 11:12:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.008 11:12:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.008 11:12:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.008 11:12:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.008 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:08:44.008 [2024-10-13 11:12:25.519045] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:44.008 [2024-10-13 11:12:25.519380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.266 [2024-10-13 11:12:25.659842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.266 [2024-10-13 11:12:25.709100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.266 [2024-10-13 11:12:25.709528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.266 [2024-10-13 11:12:25.709690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.266 [2024-10-13 11:12:25.709709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:44.266 [2024-10-13 11:12:25.709736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.201 11:12:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.201 11:12:26 -- common/autotest_common.sh@852 -- # return 0 00:08:45.201 11:12:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:45.201 11:12:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:45.201 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:08:45.201 11:12:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.201 11:12:26 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.201 [2024-10-13 11:12:26.796492] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.201 [2024-10-13 11:12:26.796758] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.201 [2024-10-13 11:12:26.796977] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.464 11:12:26 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:45.464 11:12:26 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 4c47c265-08be-4f87-a624-2b6338737ccf 00:08:45.464 11:12:26 -- common/autotest_common.sh@887 -- # local bdev_name=4c47c265-08be-4f87-a624-2b6338737ccf 00:08:45.464 11:12:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:45.464 11:12:26 -- common/autotest_common.sh@889 -- # local i 00:08:45.464 11:12:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:45.464 11:12:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:45.464 11:12:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.723 11:12:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c47c265-08be-4f87-a624-2b6338737ccf -t 2000 00:08:45.981 [ 00:08:45.981 { 00:08:45.981 "name": "4c47c265-08be-4f87-a624-2b6338737ccf", 00:08:45.981 "aliases": [ 00:08:45.981 "lvs/lvol" 00:08:45.981 ], 00:08:45.981 "product_name": "Logical Volume", 00:08:45.981 "block_size": 4096, 00:08:45.981 "num_blocks": 38912, 00:08:45.981 "uuid": "4c47c265-08be-4f87-a624-2b6338737ccf", 00:08:45.981 "assigned_rate_limits": { 00:08:45.981 "rw_ios_per_sec": 0, 00:08:45.981 "rw_mbytes_per_sec": 0, 00:08:45.981 "r_mbytes_per_sec": 0, 00:08:45.981 "w_mbytes_per_sec": 0 00:08:45.981 }, 00:08:45.981 "claimed": false, 00:08:45.981 "zoned": false, 00:08:45.981 "supported_io_types": { 00:08:45.981 "read": true, 00:08:45.981 "write": true, 00:08:45.981 "unmap": true, 00:08:45.981 "write_zeroes": true, 00:08:45.981 "flush": false, 00:08:45.981 "reset": true, 00:08:45.981 "compare": false, 00:08:45.981 "compare_and_write": false, 00:08:45.981 "abort": false, 00:08:45.981 "nvme_admin": false, 00:08:45.981 "nvme_io": false 00:08:45.981 }, 00:08:45.981 "driver_specific": { 00:08:45.981 "lvol": { 00:08:45.981 "lvol_store_uuid": "9d1d7b73-d61e-46d1-94c9-d16edc026467", 00:08:45.981 "base_bdev": "aio_bdev", 00:08:45.981 "thin_provision": false, 00:08:45.981 "snapshot": false, 00:08:45.981 "clone": false, 00:08:45.981 "esnap_clone": false 00:08:45.981 } 00:08:45.981 } 00:08:45.981 } 00:08:45.981 ] 00:08:45.981 11:12:27 -- common/autotest_common.sh@895 -- # return 0 00:08:45.981 11:12:27 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:45.981 11:12:27 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:46.240 11:12:27 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:46.240 11:12:27 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:46.240 11:12:27 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:46.499 11:12:27 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:46.499 11:12:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.758 [2024-10-13 11:12:28.130068] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.758 11:12:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:46.758 11:12:28 -- common/autotest_common.sh@640 -- # local es=0 00:08:46.758 11:12:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:46.758 11:12:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.758 11:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.758 11:12:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.758 11:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.758 11:12:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.758 11:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.758 11:12:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.758 11:12:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.758 11:12:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:47.017 request: 00:08:47.017 { 00:08:47.017 "uuid": "9d1d7b73-d61e-46d1-94c9-d16edc026467", 00:08:47.017 "method": "bdev_lvol_get_lvstores", 00:08:47.017 "req_id": 1 00:08:47.017 } 00:08:47.017 Got JSON-RPC error response 00:08:47.017 response: 00:08:47.017 { 00:08:47.017 "code": -19, 00:08:47.017 "message": "No such device" 00:08:47.017 } 00:08:47.017 11:12:28 -- common/autotest_common.sh@643 -- # es=1 00:08:47.017 11:12:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:47.017 11:12:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:47.017 11:12:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:47.017 11:12:28 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.276 aio_bdev 00:08:47.276 11:12:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4c47c265-08be-4f87-a624-2b6338737ccf 00:08:47.276 11:12:28 -- common/autotest_common.sh@887 -- # local bdev_name=4c47c265-08be-4f87-a624-2b6338737ccf 00:08:47.276 11:12:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:47.276 11:12:28 -- common/autotest_common.sh@889 -- # local i 00:08:47.276 11:12:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:47.276 11:12:28 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:08:47.276 11:12:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.535 11:12:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c47c265-08be-4f87-a624-2b6338737ccf -t 2000 00:08:47.535 [ 00:08:47.535 { 00:08:47.535 "name": "4c47c265-08be-4f87-a624-2b6338737ccf", 00:08:47.535 "aliases": [ 00:08:47.535 "lvs/lvol" 00:08:47.535 ], 00:08:47.535 "product_name": "Logical Volume", 00:08:47.535 "block_size": 4096, 00:08:47.535 "num_blocks": 38912, 00:08:47.535 "uuid": "4c47c265-08be-4f87-a624-2b6338737ccf", 00:08:47.535 "assigned_rate_limits": { 00:08:47.535 "rw_ios_per_sec": 0, 00:08:47.535 "rw_mbytes_per_sec": 0, 00:08:47.535 "r_mbytes_per_sec": 0, 00:08:47.535 "w_mbytes_per_sec": 0 00:08:47.535 }, 00:08:47.535 "claimed": false, 00:08:47.535 "zoned": false, 00:08:47.535 "supported_io_types": { 00:08:47.535 "read": true, 00:08:47.535 "write": true, 00:08:47.535 "unmap": true, 00:08:47.535 "write_zeroes": true, 00:08:47.535 "flush": false, 00:08:47.535 "reset": true, 00:08:47.535 "compare": false, 00:08:47.535 "compare_and_write": false, 00:08:47.535 "abort": false, 00:08:47.535 "nvme_admin": false, 00:08:47.535 "nvme_io": false 00:08:47.535 }, 00:08:47.535 "driver_specific": { 00:08:47.535 "lvol": { 00:08:47.535 "lvol_store_uuid": "9d1d7b73-d61e-46d1-94c9-d16edc026467", 00:08:47.535 "base_bdev": "aio_bdev", 00:08:47.535 "thin_provision": false, 00:08:47.535 "snapshot": false, 00:08:47.535 "clone": false, 00:08:47.535 "esnap_clone": false 00:08:47.535 } 00:08:47.535 } 00:08:47.535 } 00:08:47.535 ] 00:08:47.535 11:12:29 -- common/autotest_common.sh@895 -- # return 0 00:08:47.535 11:12:29 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:47.535 11:12:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:47.794 11:12:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:47.794 11:12:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:47.794 11:12:29 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:48.052 11:12:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:48.052 11:12:29 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4c47c265-08be-4f87-a624-2b6338737ccf 00:08:48.311 11:12:29 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d1d7b73-d61e-46d1-94c9-d16edc026467 00:08:48.570 11:12:30 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.828 11:12:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.397 ************************************ 00:08:49.397 END TEST lvs_grow_dirty 00:08:49.397 ************************************ 00:08:49.397 00:08:49.397 real 0m20.378s 00:08:49.397 user 0m40.541s 00:08:49.397 sys 0m9.290s 00:08:49.397 11:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.397 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:08:49.397 11:12:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.397 11:12:30 -- common/autotest_common.sh@796 -- # type=--id 00:08:49.397 11:12:30 -- common/autotest_common.sh@797 -- # id=0 00:08:49.397 
11:12:30 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:08:49.397 11:12:30 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.397 11:12:30 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:08:49.397 11:12:30 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:08:49.397 11:12:30 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:08:49.397 11:12:30 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.397 nvmf_trace.0 00:08:49.397 11:12:30 -- common/autotest_common.sh@811 -- # return 0 00:08:49.397 11:12:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.397 11:12:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:49.397 11:12:30 -- nvmf/common.sh@116 -- # sync 00:08:49.656 11:12:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:49.656 11:12:31 -- nvmf/common.sh@119 -- # set +e 00:08:49.656 11:12:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:49.656 11:12:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:49.656 rmmod nvme_tcp 00:08:49.656 rmmod nvme_fabrics 00:08:49.656 rmmod nvme_keyring 00:08:49.656 11:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:49.656 11:12:31 -- nvmf/common.sh@123 -- # set -e 00:08:49.656 11:12:31 -- nvmf/common.sh@124 -- # return 0 00:08:49.656 11:12:31 -- nvmf/common.sh@477 -- # '[' -n 61102 ']' 00:08:49.656 11:12:31 -- nvmf/common.sh@478 -- # killprocess 61102 00:08:49.656 11:12:31 -- common/autotest_common.sh@926 -- # '[' -z 61102 ']' 00:08:49.656 11:12:31 -- common/autotest_common.sh@930 -- # kill -0 61102 00:08:49.656 11:12:31 -- common/autotest_common.sh@931 -- # uname 00:08:49.656 11:12:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:49.656 11:12:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61102 00:08:49.916 11:12:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:49.916 11:12:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:49.916 killing process with pid 61102 00:08:49.916 11:12:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61102' 00:08:49.916 11:12:31 -- common/autotest_common.sh@945 -- # kill 61102 00:08:49.916 11:12:31 -- common/autotest_common.sh@950 -- # wait 61102 00:08:49.916 11:12:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:49.916 11:12:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:49.916 11:12:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:49.916 11:12:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.916 11:12:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:49.916 11:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.916 11:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.916 11:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.916 11:12:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:49.916 00:08:49.916 real 0m40.821s 00:08:49.916 user 1m4.152s 00:08:49.916 sys 0m12.416s 00:08:49.916 11:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.916 ************************************ 00:08:49.916 END TEST nvmf_lvs_grow 00:08:49.916 ************************************ 00:08:49.916 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:08:50.176 11:12:31 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.176 11:12:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.176 11:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.176 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:08:50.176 ************************************ 00:08:50.176 START TEST nvmf_bdev_io_wait 00:08:50.176 ************************************ 00:08:50.176 11:12:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.176 * Looking for test storage... 00:08:50.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.176 11:12:31 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.176 11:12:31 -- nvmf/common.sh@7 -- # uname -s 00:08:50.176 11:12:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.176 11:12:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.176 11:12:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.176 11:12:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.176 11:12:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.176 11:12:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.176 11:12:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.176 11:12:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.176 11:12:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.176 11:12:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.176 11:12:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:50.176 11:12:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:50.176 11:12:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.176 11:12:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.176 11:12:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.176 11:12:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.176 11:12:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.176 11:12:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.176 11:12:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.176 11:12:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.176 11:12:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:50.176 11:12:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.176 11:12:31 -- paths/export.sh@5 -- # export PATH 00:08:50.176 11:12:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.176 11:12:31 -- nvmf/common.sh@46 -- # : 0 00:08:50.176 11:12:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.176 11:12:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.176 11:12:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.176 11:12:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.176 11:12:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.176 11:12:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.176 11:12:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.176 11:12:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.176 11:12:31 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.176 11:12:31 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.176 11:12:31 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.176 11:12:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.176 11:12:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.177 11:12:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.177 11:12:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.177 11:12:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.177 11:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.177 11:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.177 11:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.177 11:12:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:50.177 11:12:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:50.177 11:12:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:50.177 11:12:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:50.177 11:12:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:50.177 11:12:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:50.177 11:12:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.177 11:12:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.177 11:12:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.177 11:12:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:50.177 11:12:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.177 11:12:31 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.177 11:12:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.177 11:12:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.177 11:12:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.177 11:12:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.177 11:12:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.177 11:12:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.177 11:12:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:50.177 11:12:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:50.177 Cannot find device "nvmf_tgt_br" 00:08:50.177 11:12:31 -- nvmf/common.sh@154 -- # true 00:08:50.177 11:12:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.177 Cannot find device "nvmf_tgt_br2" 00:08:50.177 11:12:31 -- nvmf/common.sh@155 -- # true 00:08:50.177 11:12:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:50.177 11:12:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:50.177 Cannot find device "nvmf_tgt_br" 00:08:50.177 11:12:31 -- nvmf/common.sh@157 -- # true 00:08:50.177 11:12:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:50.177 Cannot find device "nvmf_tgt_br2" 00:08:50.177 11:12:31 -- nvmf/common.sh@158 -- # true 00:08:50.177 11:12:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:50.177 11:12:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:50.177 11:12:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.177 11:12:31 -- nvmf/common.sh@161 -- # true 00:08:50.177 11:12:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.436 11:12:31 -- nvmf/common.sh@162 -- # true 00:08:50.436 11:12:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.436 11:12:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.436 11:12:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.436 11:12:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.436 11:12:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.436 11:12:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.436 11:12:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.436 11:12:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.436 11:12:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.436 11:12:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:50.436 11:12:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:50.436 11:12:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:50.436 11:12:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:50.436 11:12:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.436 11:12:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:50.436 11:12:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.436 11:12:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:50.436 11:12:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:50.436 11:12:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.436 11:12:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.436 11:12:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.436 11:12:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.436 11:12:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.436 11:12:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:50.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:50.436 00:08:50.436 --- 10.0.0.2 ping statistics --- 00:08:50.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.436 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:50.436 11:12:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:50.436 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.436 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:50.436 00:08:50.436 --- 10.0.0.3 ping statistics --- 00:08:50.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.436 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:50.436 11:12:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:50.436 00:08:50.436 --- 10.0.0.1 ping statistics --- 00:08:50.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.436 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:50.436 11:12:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.436 11:12:31 -- nvmf/common.sh@421 -- # return 0 00:08:50.436 11:12:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:50.436 11:12:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.437 11:12:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:50.437 11:12:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:50.437 11:12:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.437 11:12:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:50.437 11:12:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:50.437 11:12:32 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:50.437 11:12:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:50.437 11:12:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:50.437 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.437 11:12:32 -- nvmf/common.sh@469 -- # nvmfpid=61410 00:08:50.437 11:12:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:50.437 11:12:32 -- nvmf/common.sh@470 -- # waitforlisten 61410 00:08:50.437 11:12:32 -- common/autotest_common.sh@819 -- # '[' -z 61410 ']' 00:08:50.437 11:12:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.437 11:12:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.437 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:08:50.437 11:12:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.437 11:12:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.437 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.696 [2024-10-13 11:12:32.075576] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:50.696 [2024-10-13 11:12:32.075680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.696 [2024-10-13 11:12:32.215990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.696 [2024-10-13 11:12:32.288066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.696 [2024-10-13 11:12:32.288270] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.696 [2024-10-13 11:12:32.288287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.696 [2024-10-13 11:12:32.288298] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.696 [2024-10-13 11:12:32.288485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.696 [2024-10-13 11:12:32.288631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.696 [2024-10-13 11:12:32.289264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.696 [2024-10-13 11:12:32.289303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.955 11:12:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.955 11:12:32 -- common/autotest_common.sh@852 -- # return 0 00:08:50.955 11:12:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:50.955 11:12:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 11:12:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 [2024-10-13 11:12:32.415466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 
00:08:50.955 Malloc0 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.955 11:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.955 11:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 [2024-10-13 11:12:32.470416] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.955 11:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.955 11:12:32 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61432 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@30 -- # READ_PID=61434 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # config=() 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # local subsystem config 00:08:50.956 11:12:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:50.956 { 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme$subsystem", 00:08:50.956 "trtype": "$TEST_TRANSPORT", 00:08:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "$NVMF_PORT", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.956 "hdgst": ${hdgst:-false}, 00:08:50.956 "ddgst": ${ddgst:-false} 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 } 00:08:50.956 EOF 00:08:50.956 )") 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # config=() 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # local subsystem config 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61436 00:08:50.956 11:12:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:50.956 { 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme$subsystem", 00:08:50.956 "trtype": "$TEST_TRANSPORT", 00:08:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "$NVMF_PORT", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.956 "hdgst": ${hdgst:-false}, 00:08:50.956 "ddgst": ${ddgst:-false} 
00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 } 00:08:50.956 EOF 00:08:50.956 )") 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # cat 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61439 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@35 -- # sync 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # cat 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # config=() 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # local subsystem config 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:50.956 11:12:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:50.956 { 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme$subsystem", 00:08:50.956 "trtype": "$TEST_TRANSPORT", 00:08:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "$NVMF_PORT", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.956 "hdgst": ${hdgst:-false}, 00:08:50.956 "ddgst": ${ddgst:-false} 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 } 00:08:50.956 EOF 00:08:50.956 )") 00:08:50.956 11:12:32 -- nvmf/common.sh@544 -- # jq . 00:08:50.956 11:12:32 -- nvmf/common.sh@544 -- # jq . 00:08:50.956 11:12:32 -- nvmf/common.sh@545 -- # IFS=, 00:08:50.956 11:12:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme1", 00:08:50.956 "trtype": "tcp", 00:08:50.956 "traddr": "10.0.0.2", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "4420", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.956 "hdgst": false, 00:08:50.956 "ddgst": false 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 }' 00:08:50.956 11:12:32 -- nvmf/common.sh@545 -- # IFS=, 00:08:50.956 11:12:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme1", 00:08:50.956 "trtype": "tcp", 00:08:50.956 "traddr": "10.0.0.2", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "4420", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.956 "hdgst": false, 00:08:50.956 "ddgst": false 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 }' 00:08:50.956 11:12:32 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # config=() 00:08:50.956 11:12:32 -- nvmf/common.sh@520 -- # local subsystem config 00:08:50.956 11:12:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:50.956 { 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme$subsystem", 00:08:50.956 "trtype": "$TEST_TRANSPORT", 00:08:50.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "$NVMF_PORT", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:08:50.956 "hdgst": ${hdgst:-false}, 00:08:50.956 "ddgst": ${ddgst:-false} 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 } 00:08:50.956 EOF 00:08:50.956 )") 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # cat 00:08:50.956 11:12:32 -- nvmf/common.sh@542 -- # cat 00:08:50.956 11:12:32 -- nvmf/common.sh@544 -- # jq . 00:08:50.956 11:12:32 -- nvmf/common.sh@545 -- # IFS=, 00:08:50.956 11:12:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme1", 00:08:50.956 "trtype": "tcp", 00:08:50.956 "traddr": "10.0.0.2", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "4420", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.956 "hdgst": false, 00:08:50.956 "ddgst": false 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 }' 00:08:50.956 11:12:32 -- nvmf/common.sh@544 -- # jq . 00:08:50.956 11:12:32 -- nvmf/common.sh@545 -- # IFS=, 00:08:50.956 11:12:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:50.956 "params": { 00:08:50.956 "name": "Nvme1", 00:08:50.956 "trtype": "tcp", 00:08:50.956 "traddr": "10.0.0.2", 00:08:50.956 "adrfam": "ipv4", 00:08:50.956 "trsvcid": "4420", 00:08:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.956 "hdgst": false, 00:08:50.956 "ddgst": false 00:08:50.956 }, 00:08:50.956 "method": "bdev_nvme_attach_controller" 00:08:50.956 }' 00:08:50.957 11:12:32 -- target/bdev_io_wait.sh@37 -- # wait 61432 00:08:50.957 [2024-10-13 11:12:32.538441] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:50.957 [2024-10-13 11:12:32.538526] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:50.957 [2024-10-13 11:12:32.546448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:50.957 [2024-10-13 11:12:32.546522] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:51.215 [2024-10-13 11:12:32.555203] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.215 [2024-10-13 11:12:32.555439] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.215 [2024-10-13 11:12:32.562996] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:51.215 [2024-10-13 11:12:32.563072] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.215 [2024-10-13 11:12:32.714810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.215 [2024-10-13 11:12:32.765587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.215 [2024-10-13 11:12:32.768645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:51.215 [2024-10-13 11:12:32.808654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.474 [2024-10-13 11:12:32.819091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:51.474 [2024-10-13 11:12:32.852071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.474 [2024-10-13 11:12:32.860931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:51.474 Running I/O for 1 seconds... 00:08:51.474 [2024-10-13 11:12:32.904165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:51.474 Running I/O for 1 seconds... 00:08:51.474 Running I/O for 1 seconds... 00:08:51.474 Running I/O for 1 seconds... 00:08:52.411 00:08:52.411 Latency(us) 00:08:52.411 [2024-10-13T11:12:34.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.411 [2024-10-13T11:12:34.013Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:52.411 Nvme1n1 : 1.02 6279.42 24.53 0.00 0.00 20295.64 8221.79 35270.28 00:08:52.411 [2024-10-13T11:12:34.013Z] =================================================================================================================== 00:08:52.411 [2024-10-13T11:12:34.013Z] Total : 6279.42 24.53 0.00 0.00 20295.64 8221.79 35270.28 00:08:52.411 00:08:52.411 Latency(us) 00:08:52.411 [2024-10-13T11:12:34.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.411 [2024-10-13T11:12:34.013Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:52.411 Nvme1n1 : 1.01 8719.39 34.06 0.00 0.00 14600.34 9830.40 27286.81 00:08:52.411 [2024-10-13T11:12:34.013Z] =================================================================================================================== 00:08:52.411 [2024-10-13T11:12:34.013Z] Total : 8719.39 34.06 0.00 0.00 14600.34 9830.40 27286.81 00:08:52.411 00:08:52.411 Latency(us) 00:08:52.411 [2024-10-13T11:12:34.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.411 [2024-10-13T11:12:34.013Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:52.411 Nvme1n1 : 1.00 172742.24 674.77 0.00 0.00 738.28 325.82 1355.40 00:08:52.411 [2024-10-13T11:12:34.013Z] =================================================================================================================== 00:08:52.411 [2024-10-13T11:12:34.013Z] Total : 172742.24 674.77 0.00 0.00 738.28 325.82 1355.40 00:08:52.669 00:08:52.669 Latency(us) 00:08:52.669 [2024-10-13T11:12:34.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.669 [2024-10-13T11:12:34.271Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:52.669 Nvme1n1 : 1.01 6511.19 25.43 0.00 0.00 19600.17 5093.93 43849.54 00:08:52.669 [2024-10-13T11:12:34.271Z] 
=================================================================================================================== 00:08:52.669 [2024-10-13T11:12:34.271Z] Total : 6511.19 25.43 0.00 0.00 19600.17 5093.93 43849.54 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@38 -- # wait 61434 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@39 -- # wait 61436 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@40 -- # wait 61439 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.669 11:12:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.669 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:52.669 11:12:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:52.669 11:12:34 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:52.669 11:12:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:52.669 11:12:34 -- nvmf/common.sh@116 -- # sync 00:08:52.928 11:12:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:52.928 11:12:34 -- nvmf/common.sh@119 -- # set +e 00:08:52.928 11:12:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:52.928 11:12:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:52.928 rmmod nvme_tcp 00:08:52.928 rmmod nvme_fabrics 00:08:52.928 rmmod nvme_keyring 00:08:52.928 11:12:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:52.928 11:12:34 -- nvmf/common.sh@123 -- # set -e 00:08:52.928 11:12:34 -- nvmf/common.sh@124 -- # return 0 00:08:52.928 11:12:34 -- nvmf/common.sh@477 -- # '[' -n 61410 ']' 00:08:52.928 11:12:34 -- nvmf/common.sh@478 -- # killprocess 61410 00:08:52.928 11:12:34 -- common/autotest_common.sh@926 -- # '[' -z 61410 ']' 00:08:52.928 11:12:34 -- common/autotest_common.sh@930 -- # kill -0 61410 00:08:52.928 11:12:34 -- common/autotest_common.sh@931 -- # uname 00:08:52.928 11:12:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.928 11:12:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61410 00:08:52.928 11:12:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.928 11:12:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.928 killing process with pid 61410 00:08:52.928 11:12:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61410' 00:08:52.928 11:12:34 -- common/autotest_common.sh@945 -- # kill 61410 00:08:52.928 11:12:34 -- common/autotest_common.sh@950 -- # wait 61410 00:08:53.187 11:12:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:53.187 11:12:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:53.187 11:12:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:53.187 11:12:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.187 11:12:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:53.187 11:12:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.188 11:12:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.188 11:12:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.188 11:12:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:53.188 ************************************ 00:08:53.188 END TEST nvmf_bdev_io_wait 00:08:53.188 ************************************ 00:08:53.188 00:08:53.188 real 0m3.028s 00:08:53.188 user 0m13.361s 00:08:53.188 sys 0m1.845s 00:08:53.188 11:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.188 
11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:53.188 11:12:34 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.188 11:12:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:53.188 11:12:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.188 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:53.188 ************************************ 00:08:53.188 START TEST nvmf_queue_depth 00:08:53.188 ************************************ 00:08:53.188 11:12:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.188 * Looking for test storage... 00:08:53.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.188 11:12:34 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.188 11:12:34 -- nvmf/common.sh@7 -- # uname -s 00:08:53.188 11:12:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.188 11:12:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.188 11:12:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.188 11:12:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.188 11:12:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.188 11:12:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.188 11:12:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.188 11:12:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.188 11:12:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.188 11:12:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:53.188 11:12:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:08:53.188 11:12:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.188 11:12:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.188 11:12:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.188 11:12:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.188 11:12:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.188 11:12:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.188 11:12:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.188 11:12:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.188 11:12:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.188 11:12:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.188 11:12:34 -- paths/export.sh@5 -- # export PATH 00:08:53.188 11:12:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.188 11:12:34 -- nvmf/common.sh@46 -- # : 0 00:08:53.188 11:12:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:53.188 11:12:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:53.188 11:12:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:53.188 11:12:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.188 11:12:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.188 11:12:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:53.188 11:12:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:53.188 11:12:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:53.188 11:12:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:53.188 11:12:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:53.188 11:12:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.188 11:12:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:53.188 11:12:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:53.188 11:12:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.188 11:12:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:53.188 11:12:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:53.188 11:12:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:53.188 11:12:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.188 11:12:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.188 11:12:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.188 11:12:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@414 -- # [[ virt 
== phy-fallback ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:53.188 11:12:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:53.188 11:12:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.188 11:12:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.188 11:12:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.188 11:12:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:53.188 11:12:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.188 11:12:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.188 11:12:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.188 11:12:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.188 11:12:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.188 11:12:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.188 11:12:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.188 11:12:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.188 11:12:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:53.188 11:12:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:53.188 Cannot find device "nvmf_tgt_br" 00:08:53.188 11:12:34 -- nvmf/common.sh@154 -- # true 00:08:53.188 11:12:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.188 Cannot find device "nvmf_tgt_br2" 00:08:53.188 11:12:34 -- nvmf/common.sh@155 -- # true 00:08:53.188 11:12:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:53.188 11:12:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:53.188 Cannot find device "nvmf_tgt_br" 00:08:53.188 11:12:34 -- nvmf/common.sh@157 -- # true 00:08:53.188 11:12:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:53.188 Cannot find device "nvmf_tgt_br2" 00:08:53.188 11:12:34 -- nvmf/common.sh@158 -- # true 00:08:53.188 11:12:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:53.447 11:12:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:53.447 11:12:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.447 11:12:34 -- nvmf/common.sh@161 -- # true 00:08:53.447 11:12:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.447 11:12:34 -- nvmf/common.sh@162 -- # true 00:08:53.447 11:12:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.447 11:12:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.447 11:12:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.447 11:12:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.447 11:12:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.447 11:12:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.447 11:12:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.447 11:12:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.447 11:12:34 -- nvmf/common.sh@179 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.447 11:12:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:53.447 11:12:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:53.447 11:12:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:53.447 11:12:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:53.447 11:12:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.447 11:12:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.447 11:12:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.447 11:12:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:53.447 11:12:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:53.447 11:12:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.447 11:12:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.447 11:12:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.447 11:12:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.447 11:12:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.447 11:12:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:53.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:08:53.447 00:08:53.447 --- 10.0.0.2 ping statistics --- 00:08:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.447 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:53.447 11:12:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:53.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:53.447 00:08:53.447 --- 10.0.0.3 ping statistics --- 00:08:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.447 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:53.447 11:12:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:53.447 00:08:53.447 --- 10.0.0.1 ping statistics --- 00:08:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.447 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:53.447 11:12:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.447 11:12:35 -- nvmf/common.sh@421 -- # return 0 00:08:53.447 11:12:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:53.447 11:12:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.447 11:12:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:53.447 11:12:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:53.447 11:12:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.447 11:12:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:53.447 11:12:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:53.447 11:12:35 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:53.447 11:12:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:53.447 11:12:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:53.447 11:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:53.707 11:12:35 -- nvmf/common.sh@469 -- # nvmfpid=61649 00:08:53.707 11:12:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.707 11:12:35 -- nvmf/common.sh@470 -- # waitforlisten 61649 00:08:53.707 11:12:35 -- common/autotest_common.sh@819 -- # '[' -z 61649 ']' 00:08:53.707 11:12:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.707 11:12:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:53.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.707 11:12:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.707 11:12:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:53.707 11:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:53.707 [2024-10-13 11:12:35.090520] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:53.707 [2024-10-13 11:12:35.090618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.707 [2024-10-13 11:12:35.227336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.707 [2024-10-13 11:12:35.279739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.707 [2024-10-13 11:12:35.279904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.707 [2024-10-13 11:12:35.279916] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.707 [2024-10-13 11:12:35.279923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
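The nvmf_veth_init trace above reduces to a short bring-up sequence: one veth pair for the initiator, two veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. A condensed sketch using the interface names and addresses from the log follows; running it standalone, outside the harness and its cleanup traps, is an assumption.

  # Sketch (assumption: run as root, outside the test harness) of the topology built above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host reaches both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace reaches the initiator

The three one-packet pings recorded above gate the start of the target application; the earlier "Cannot find device" and "Cannot open network namespace" messages are just the teardown of a previous run's topology failing harmlessly because nothing was left behind.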
00:08:53.707 [2024-10-13 11:12:35.279954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.644 11:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.644 11:12:36 -- common/autotest_common.sh@852 -- # return 0 00:08:54.644 11:12:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:54.644 11:12:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:54.644 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 11:12:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.644 11:12:36 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.644 11:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.644 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 [2024-10-13 11:12:36.102928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.644 11:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.644 11:12:36 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.644 11:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.644 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 Malloc0 00:08:54.644 11:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.644 11:12:36 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.644 11:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.644 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 11:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.644 11:12:36 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.644 11:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.644 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 11:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.644 11:12:36 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.645 11:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.645 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.645 [2024-10-13 11:12:36.154837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.645 11:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.645 11:12:36 -- target/queue_depth.sh@30 -- # bdevperf_pid=61681 00:08:54.645 11:12:36 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.645 11:12:36 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.645 11:12:36 -- target/queue_depth.sh@33 -- # waitforlisten 61681 /var/tmp/bdevperf.sock 00:08:54.645 11:12:36 -- common/autotest_common.sh@819 -- # '[' -z 61681 ']' 00:08:54.645 11:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.645 11:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
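Underneath the rpc_cmd wrappers, the queue_depth target setup and the bdevperf run traced here amount to roughly the following sequence. Treating these as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock, using repo-relative paths, and backgrounding bdevperf by hand are assumptions; every flag value is taken from the trace.

  # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem, one listener.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf with queue depth 1024, attached to that subsystem over TCP.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # assumption: backgrounded by hand
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The distinctive knob is -q 1024 (I/O depth 1024 with 4 KiB I/Os); the trace that follows shows the 10-second verify workload completing at roughly 15.5k IOPS.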
00:08:54.645 11:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.645 11:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.645 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.645 [2024-10-13 11:12:36.214542] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:54.645 [2024-10-13 11:12:36.214646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:08:54.904 [2024-10-13 11:12:36.355315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.904 [2024-10-13 11:12:36.426078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.842 11:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.842 11:12:37 -- common/autotest_common.sh@852 -- # return 0 00:08:55.842 11:12:37 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:55.842 11:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.842 11:12:37 -- common/autotest_common.sh@10 -- # set +x 00:08:55.842 NVMe0n1 00:08:55.842 11:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.842 11:12:37 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.842 Running I/O for 10 seconds... 00:09:08.104 00:09:08.104 Latency(us) 00:09:08.104 [2024-10-13T11:12:49.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.104 [2024-10-13T11:12:49.706Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:08.104 Verification LBA range: start 0x0 length 0x4000 00:09:08.104 NVMe0n1 : 10.06 15518.43 60.62 0.00 0.00 65747.03 15192.44 58386.62 00:09:08.104 [2024-10-13T11:12:49.706Z] =================================================================================================================== 00:09:08.104 [2024-10-13T11:12:49.706Z] Total : 15518.43 60.62 0.00 0.00 65747.03 15192.44 58386.62 00:09:08.104 0 00:09:08.105 11:12:47 -- target/queue_depth.sh@39 -- # killprocess 61681 00:09:08.105 11:12:47 -- common/autotest_common.sh@926 -- # '[' -z 61681 ']' 00:09:08.105 11:12:47 -- common/autotest_common.sh@930 -- # kill -0 61681 00:09:08.105 11:12:47 -- common/autotest_common.sh@931 -- # uname 00:09:08.105 11:12:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:08.105 11:12:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61681 00:09:08.105 11:12:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:08.105 killing process with pid 61681 00:09:08.105 11:12:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:08.105 11:12:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61681' 00:09:08.105 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.105 00:09:08.105 Latency(us) 00:09:08.105 [2024-10-13T11:12:49.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.105 [2024-10-13T11:12:49.707Z] =================================================================================================================== 00:09:08.105 
[2024-10-13T11:12:49.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.105 11:12:47 -- common/autotest_common.sh@945 -- # kill 61681 00:09:08.105 11:12:47 -- common/autotest_common.sh@950 -- # wait 61681 00:09:08.105 11:12:47 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:08.105 11:12:47 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:08.105 11:12:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:08.105 11:12:47 -- nvmf/common.sh@116 -- # sync 00:09:08.105 11:12:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:08.105 11:12:47 -- nvmf/common.sh@119 -- # set +e 00:09:08.105 11:12:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:08.105 11:12:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:08.105 rmmod nvme_tcp 00:09:08.105 rmmod nvme_fabrics 00:09:08.105 rmmod nvme_keyring 00:09:08.105 11:12:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:08.105 11:12:47 -- nvmf/common.sh@123 -- # set -e 00:09:08.105 11:12:47 -- nvmf/common.sh@124 -- # return 0 00:09:08.105 11:12:47 -- nvmf/common.sh@477 -- # '[' -n 61649 ']' 00:09:08.105 11:12:47 -- nvmf/common.sh@478 -- # killprocess 61649 00:09:08.105 11:12:47 -- common/autotest_common.sh@926 -- # '[' -z 61649 ']' 00:09:08.105 11:12:47 -- common/autotest_common.sh@930 -- # kill -0 61649 00:09:08.105 11:12:47 -- common/autotest_common.sh@931 -- # uname 00:09:08.105 11:12:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:08.105 11:12:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61649 00:09:08.105 11:12:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:08.105 11:12:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:08.105 killing process with pid 61649 00:09:08.105 11:12:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61649' 00:09:08.105 11:12:47 -- common/autotest_common.sh@945 -- # kill 61649 00:09:08.105 11:12:47 -- common/autotest_common.sh@950 -- # wait 61649 00:09:08.105 11:12:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:08.105 11:12:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:08.105 11:12:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:08.105 11:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.105 11:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.105 11:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.105 11:12:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:08.105 00:09:08.105 real 0m13.477s 00:09:08.105 user 0m23.683s 00:09:08.105 sys 0m1.913s 00:09:08.105 11:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.105 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 ************************************ 00:09:08.105 END TEST nvmf_queue_depth 00:09:08.105 ************************************ 00:09:08.105 11:12:48 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.105 11:12:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:08.105 11:12:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.105 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 ************************************ 00:09:08.105 START TEST nvmf_multipath 00:09:08.105 
************************************ 00:09:08.105 11:12:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.105 * Looking for test storage... 00:09:08.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.105 11:12:48 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.105 11:12:48 -- nvmf/common.sh@7 -- # uname -s 00:09:08.105 11:12:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.105 11:12:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.105 11:12:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.105 11:12:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.105 11:12:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.105 11:12:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.105 11:12:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.105 11:12:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.105 11:12:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.105 11:12:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:08.105 11:12:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:08.105 11:12:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.105 11:12:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.105 11:12:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.105 11:12:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.105 11:12:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.105 11:12:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.105 11:12:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.105 11:12:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.105 11:12:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.105 11:12:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.105 11:12:48 -- paths/export.sh@5 -- # export PATH 00:09:08.105 11:12:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.105 11:12:48 -- nvmf/common.sh@46 -- # : 0 00:09:08.105 11:12:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.105 11:12:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.105 11:12:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.105 11:12:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.105 11:12:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.105 11:12:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:08.105 11:12:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.105 11:12:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.105 11:12:48 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.105 11:12:48 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.105 11:12:48 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.105 11:12:48 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.105 11:12:48 -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.105 11:12:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:08.105 11:12:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.105 11:12:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:08.105 11:12:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:08.105 11:12:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:08.105 11:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.105 11:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.105 11:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.105 11:12:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:08.105 11:12:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:08.105 11:12:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.105 11:12:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.105 11:12:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:08.105 11:12:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:08.105 11:12:48 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.105 11:12:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.105 11:12:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.105 11:12:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.105 11:12:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.105 11:12:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.105 11:12:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.105 11:12:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.105 11:12:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:08.105 11:12:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:08.105 Cannot find device "nvmf_tgt_br" 00:09:08.105 11:12:48 -- nvmf/common.sh@154 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.106 Cannot find device "nvmf_tgt_br2" 00:09:08.106 11:12:48 -- nvmf/common.sh@155 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:08.106 11:12:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:08.106 Cannot find device "nvmf_tgt_br" 00:09:08.106 11:12:48 -- nvmf/common.sh@157 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:08.106 Cannot find device "nvmf_tgt_br2" 00:09:08.106 11:12:48 -- nvmf/common.sh@158 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:08.106 11:12:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:08.106 11:12:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.106 11:12:48 -- nvmf/common.sh@161 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.106 11:12:48 -- nvmf/common.sh@162 -- # true 00:09:08.106 11:12:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.106 11:12:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.106 11:12:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.106 11:12:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.106 11:12:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.106 11:12:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.106 11:12:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.106 11:12:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:08.106 11:12:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:08.106 11:12:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:08.106 11:12:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:08.106 11:12:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:08.106 11:12:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:08.106 11:12:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:09:08.106 11:12:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.106 11:12:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.106 11:12:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:08.106 11:12:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:08.106 11:12:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:08.106 11:12:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:08.106 11:12:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.106 11:12:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.106 11:12:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.106 11:12:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:08.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:08.106 00:09:08.106 --- 10.0.0.2 ping statistics --- 00:09:08.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.106 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:08.106 11:12:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:08.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:08.106 00:09:08.106 --- 10.0.0.3 ping statistics --- 00:09:08.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.106 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:08.106 11:12:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:08.106 00:09:08.106 --- 10.0.0.1 ping statistics --- 00:09:08.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.106 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:08.106 11:12:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.106 11:12:48 -- nvmf/common.sh@421 -- # return 0 00:09:08.106 11:12:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:08.106 11:12:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.106 11:12:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:08.106 11:12:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:08.106 11:12:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.106 11:12:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:08.106 11:12:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:08.106 11:12:48 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:08.106 11:12:48 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:08.106 11:12:48 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:08.106 11:12:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:08.106 11:12:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:08.106 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:08.106 11:12:48 -- nvmf/common.sh@469 -- # nvmfpid=61997 00:09:08.106 11:12:48 -- nvmf/common.sh@470 -- # waitforlisten 61997 00:09:08.106 11:12:48 -- common/autotest_common.sh@819 -- # '[' -z 61997 ']' 00:09:08.106 11:12:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.106 11:12:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.106 11:12:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.106 11:12:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.106 11:12:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.106 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:08.106 [2024-10-13 11:12:48.647216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:08.106 [2024-10-13 11:12:48.647305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.106 [2024-10-13 11:12:48.785477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.106 [2024-10-13 11:12:48.860905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.106 [2024-10-13 11:12:48.861066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.106 [2024-10-13 11:12:48.861082] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.106 [2024-10-13 11:12:48.861093] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
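The multipath run reuses the same veth topology; what changes up front is the target's core mask, 0xF here versus 0x2 for the queue_depth run, so four reactors come up instead of one. A condensed sketch of the launch as traced above (capturing the pid with $! is an assumption; the trace only records the resulting value, 61997):

  modprobe nvme-tcp    # initiator-side transport module for the later `nvme connect` calls
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!           # assumption: pid captured this way; 61997 in this run

Everything multipath-specific comes after this point: the same namespace is exposed through listeners on both 10.0.0.2 and 10.0.0.3, the initiator connects to each with nvme connect, and the fio workload runs while the ANA state of the two paths is flipped between optimized, non-optimized and inaccessible.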
00:09:08.106 [2024-10-13 11:12:48.861989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.106 [2024-10-13 11:12:48.862163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.106 [2024-10-13 11:12:48.862295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.106 [2024-10-13 11:12:48.862303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.106 11:12:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.106 11:12:49 -- common/autotest_common.sh@852 -- # return 0 00:09:08.106 11:12:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:08.106 11:12:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:08.106 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:09:08.365 11:12:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.365 11:12:49 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.624 [2024-10-13 11:12:49.968469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.624 11:12:50 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:08.883 Malloc0 00:09:08.883 11:12:50 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:09.142 11:12:50 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.400 11:12:50 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.659 [2024-10-13 11:12:51.019375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.659 11:12:51 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:09.659 [2024-10-13 11:12:51.243559] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:09.919 11:12:51 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:09.919 11:12:51 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:10.177 11:12:51 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.177 11:12:51 -- common/autotest_common.sh@1177 -- # local i=0 00:09:10.177 11:12:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.177 11:12:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:10.177 11:12:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:12.083 11:12:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:12.083 11:12:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:12.083 11:12:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.083 11:12:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:12.083 11:12:53 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.083 11:12:53 -- common/autotest_common.sh@1187 -- # return 0 00:09:12.083 11:12:53 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:12.083 11:12:53 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:12.083 11:12:53 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:12.084 11:12:53 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:12.084 11:12:53 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:12.084 11:12:53 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:12.084 11:12:53 -- target/multipath.sh@38 -- # return 0 00:09:12.084 11:12:53 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:12.084 11:12:53 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:12.084 11:12:53 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:12.084 11:12:53 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:12.084 11:12:53 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:12.084 11:12:53 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:12.084 11:12:53 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:12.084 11:12:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:12.084 11:12:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:12.084 11:12:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:12.084 11:12:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:12.084 11:12:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:12.084 11:12:53 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:12.084 11:12:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:12.084 11:12:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:12.084 11:12:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:12.084 11:12:53 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:12.084 11:12:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:12.084 11:12:53 -- target/multipath.sh@85 -- # echo numa 00:09:12.084 11:12:53 -- target/multipath.sh@88 -- # fio_pid=62092 00:09:12.084 11:12:53 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:12.084 11:12:53 -- target/multipath.sh@90 -- # sleep 1 00:09:12.084 [global] 00:09:12.084 thread=1 00:09:12.084 invalidate=1 00:09:12.084 rw=randrw 00:09:12.084 time_based=1 00:09:12.084 runtime=6 00:09:12.084 ioengine=libaio 00:09:12.084 direct=1 00:09:12.084 bs=4096 00:09:12.084 iodepth=128 00:09:12.084 norandommap=0 00:09:12.084 numjobs=1 00:09:12.084 00:09:12.084 verify_dump=1 00:09:12.084 verify_backlog=512 00:09:12.084 verify_state_save=0 00:09:12.084 do_verify=1 00:09:12.084 verify=crc32c-intel 00:09:12.084 [job0] 00:09:12.084 filename=/dev/nvme0n1 00:09:12.084 Could not set queue depth (nvme0n1) 00:09:12.342 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.342 fio-3.35 00:09:12.342 Starting 1 thread 00:09:13.280 11:12:54 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:13.280 11:12:54 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:13.539 11:12:55 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:13.539 11:12:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:13.539 11:12:55 -- target/multipath.sh@22 -- # local timeout=20 00:09:13.539 11:12:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.539 11:12:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.539 11:12:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:13.539 11:12:55 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:13.539 11:12:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:13.539 11:12:55 -- target/multipath.sh@22 -- # local timeout=20 00:09:13.539 11:12:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.539 11:12:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.539 11:12:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:13.539 11:12:55 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:13.798 11:12:55 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:14.057 11:12:55 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:14.057 11:12:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:14.057 11:12:55 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.057 11:12:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.057 11:12:55 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.057 11:12:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.057 11:12:55 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:14.057 11:12:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:14.057 11:12:55 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.057 11:12:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.057 11:12:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.057 11:12:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.057 11:12:55 -- target/multipath.sh@104 -- # wait 62092 00:09:19.328 00:09:19.328 job0: (groupid=0, jobs=1): err= 0: pid=62113: Sun Oct 13 11:12:59 2024 00:09:19.328 read: IOPS=10.6k, BW=41.5MiB/s (43.6MB/s)(250MiB/6007msec) 00:09:19.328 slat (usec): min=3, max=7244, avg=54.64, stdev=233.67 00:09:19.328 clat (usec): min=1384, max=15034, avg=8129.93, stdev=1461.36 00:09:19.328 lat (usec): min=1455, max=15044, avg=8184.58, stdev=1466.22 00:09:19.328 clat percentiles (usec): 00:09:19.328 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7242], 00:09:19.328 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8225], 00:09:19.328 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11338], 00:09:19.328 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13960], 99.95th=[14222], 00:09:19.328 | 99.99th=[15008] 00:09:19.328 bw ( KiB/s): min= 4864, max=27320, per=52.32%, avg=22259.36, stdev=6767.47, samples=11 00:09:19.328 iops : min= 1216, max= 6830, avg=5564.82, stdev=1691.86, samples=11 00:09:19.328 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(133MiB/5445msec); 0 zone resets 00:09:19.328 slat (usec): min=14, max=4055, avg=64.82, stdev=163.73 00:09:19.328 clat (usec): min=1314, max=14260, avg=7149.17, stdev=1275.23 00:09:19.328 lat (usec): min=1336, max=14284, avg=7213.99, stdev=1280.33 00:09:19.328 clat percentiles (usec): 00:09:19.328 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5538], 20.00th=[ 6587], 00:09:19.328 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:09:19.328 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:19.328 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12911], 99.95th=[13042], 00:09:19.328 | 99.99th=[13960] 00:09:19.328 bw ( KiB/s): min= 5232, max=26880, per=89.36%, avg=22274.00, stdev=6583.97, samples=11 00:09:19.328 iops : min= 1308, max= 6720, avg=5568.45, stdev=1645.98, samples=11 00:09:19.328 lat (msec) : 2=0.02%, 4=1.65%, 10=92.24%, 20=6.08% 00:09:19.328 cpu : usr=5.31%, sys=21.13%, ctx=5517, majf=0, minf=102 00:09:19.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:19.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.328 issued rwts: total=63892,33930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.328 00:09:19.328 Run status group 0 (all jobs): 00:09:19.328 READ: bw=41.5MiB/s (43.6MB/s), 41.5MiB/s-41.5MiB/s (43.6MB/s-43.6MB/s), io=250MiB (262MB), run=6007-6007msec 00:09:19.328 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=133MiB (139MB), run=5445-5445msec 00:09:19.328 00:09:19.328 Disk stats (read/write): 00:09:19.328 nvme0n1: ios=62978/33303, merge=0/0, 
ticks=491776/224152, in_queue=715928, util=98.60% 00:09:19.328 11:12:59 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:19.328 11:13:00 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:19.328 11:13:00 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:19.328 11:13:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:19.328 11:13:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:19.328 11:13:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.328 11:13:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.328 11:13:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.328 11:13:00 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:19.328 11:13:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:19.328 11:13:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:19.328 11:13:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.328 11:13:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.328 11:13:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.328 11:13:00 -- target/multipath.sh@113 -- # echo round-robin 00:09:19.328 11:13:00 -- target/multipath.sh@116 -- # fio_pid=62194 00:09:19.328 11:13:00 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:19.328 11:13:00 -- target/multipath.sh@118 -- # sleep 1 00:09:19.328 [global] 00:09:19.328 thread=1 00:09:19.328 invalidate=1 00:09:19.328 rw=randrw 00:09:19.328 time_based=1 00:09:19.328 runtime=6 00:09:19.328 ioengine=libaio 00:09:19.328 direct=1 00:09:19.328 bs=4096 00:09:19.328 iodepth=128 00:09:19.328 norandommap=0 00:09:19.328 numjobs=1 00:09:19.328 00:09:19.328 verify_dump=1 00:09:19.328 verify_backlog=512 00:09:19.328 verify_state_save=0 00:09:19.328 do_verify=1 00:09:19.328 verify=crc32c-intel 00:09:19.328 [job0] 00:09:19.328 filename=/dev/nvme0n1 00:09:19.328 Could not set queue depth (nvme0n1) 00:09:19.328 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:19.328 fio-3.35 00:09:19.328 Starting 1 thread 00:09:19.896 11:13:01 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:20.462 11:13:01 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:20.462 11:13:02 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:20.463 11:13:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:20.463 11:13:02 -- target/multipath.sh@22 -- # local timeout=20 00:09:20.463 11:13:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.463 11:13:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.463 11:13:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:20.463 11:13:02 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:20.463 11:13:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:20.463 11:13:02 -- target/multipath.sh@22 -- # local timeout=20 00:09:20.463 11:13:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.463 11:13:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.463 11:13:02 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:20.463 11:13:02 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:20.721 11:13:02 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:21.289 11:13:02 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:21.289 11:13:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:21.289 11:13:02 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.289 11:13:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.289 11:13:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.289 11:13:02 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.289 11:13:02 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:21.289 11:13:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:21.289 11:13:02 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.289 11:13:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.289 11:13:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.289 11:13:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.289 11:13:02 -- target/multipath.sh@132 -- # wait 62194 00:09:25.477 00:09:25.477 job0: (groupid=0, jobs=1): err= 0: pid=62215: Sun Oct 13 11:13:06 2024 00:09:25.477 read: IOPS=11.7k, BW=45.8MiB/s (48.1MB/s)(275MiB/6006msec) 00:09:25.477 slat (usec): min=2, max=6994, avg=41.55, stdev=194.83 00:09:25.477 clat (usec): min=504, max=15170, avg=7369.83, stdev=1854.95 00:09:25.477 lat (usec): min=513, max=15178, avg=7411.38, stdev=1869.63 00:09:25.477 clat percentiles (usec): 00:09:25.477 | 1.00th=[ 2966], 5.00th=[ 3851], 10.00th=[ 4555], 20.00th=[ 5997], 00:09:25.477 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:09:25.477 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[10028], 00:09:25.477 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13829], 99.95th=[14091], 00:09:25.477 | 99.99th=[14877] 00:09:25.477 bw ( KiB/s): min= 4960, max=37464, per=54.38%, avg=25528.73, stdev=8295.42, samples=11 00:09:25.477 iops : min= 1240, max= 9368, avg=6382.36, stdev=2074.14, samples=11 00:09:25.477 write: IOPS=7107, BW=27.8MiB/s (29.1MB/s)(149MiB/5375msec); 0 zone resets 00:09:25.477 slat (usec): min=4, max=2300, avg=53.10, stdev=135.12 00:09:25.477 clat (usec): min=501, max=15374, avg=6362.21, stdev=1751.58 00:09:25.477 lat (usec): min=578, max=15397, avg=6415.31, stdev=1766.18 00:09:25.477 clat percentiles (usec): 00:09:25.477 | 1.00th=[ 2540], 5.00th=[ 3163], 10.00th=[ 3621], 20.00th=[ 4424], 00:09:25.477 | 30.00th=[ 5473], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7242], 00:09:25.477 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8356], 00:09:25.477 | 99.00th=[10028], 99.50th=[11338], 99.90th=[12387], 99.95th=[12911], 00:09:25.477 | 99.99th=[15270] 00:09:25.477 bw ( KiB/s): min= 5080, max=36800, per=89.73%, avg=25512.00, stdev=8232.40, samples=11 00:09:25.477 iops : min= 1270, max= 9200, avg=6378.00, stdev=2058.10, samples=11 00:09:25.477 lat (usec) : 750=0.01%, 1000=0.02% 00:09:25.477 lat (msec) : 2=0.19%, 4=8.77%, 10=87.42%, 20=3.59% 00:09:25.477 cpu : usr=6.26%, sys=22.06%, ctx=6052, majf=0, minf=127 00:09:25.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:25.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.477 issued rwts: total=70485,38204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.477 00:09:25.477 Run status group 0 (all jobs): 00:09:25.477 READ: bw=45.8MiB/s (48.1MB/s), 45.8MiB/s-45.8MiB/s (48.1MB/s-48.1MB/s), io=275MiB (289MB), run=6006-6006msec 00:09:25.477 WRITE: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=149MiB (156MB), run=5375-5375msec 00:09:25.477 00:09:25.477 Disk stats (read/write): 00:09:25.477 nvme0n1: ios=69560/37543, merge=0/0, ticks=489520/223229, in_queue=712749, util=98.62% 00:09:25.477 11:13:06 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:25.477 11:13:06 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.477 11:13:06 -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.477 11:13:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:25.477 11:13:06 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.477 11:13:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:25.477 11:13:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.477 11:13:06 -- common/autotest_common.sh@1210 -- # return 0 00:09:25.477 11:13:06 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.736 11:13:07 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:25.736 11:13:07 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:25.736 11:13:07 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:25.736 11:13:07 -- target/multipath.sh@144 -- # nvmftestfini 00:09:25.736 11:13:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:25.736 11:13:07 -- nvmf/common.sh@116 -- # sync 00:09:25.736 11:13:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:25.736 11:13:07 -- nvmf/common.sh@119 -- # set +e 00:09:25.736 11:13:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:25.736 11:13:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:25.736 rmmod nvme_tcp 00:09:25.736 rmmod nvme_fabrics 00:09:25.736 rmmod nvme_keyring 00:09:25.736 11:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:25.736 11:13:07 -- nvmf/common.sh@123 -- # set -e 00:09:25.736 11:13:07 -- nvmf/common.sh@124 -- # return 0 00:09:25.736 11:13:07 -- nvmf/common.sh@477 -- # '[' -n 61997 ']' 00:09:25.736 11:13:07 -- nvmf/common.sh@478 -- # killprocess 61997 00:09:25.736 11:13:07 -- common/autotest_common.sh@926 -- # '[' -z 61997 ']' 00:09:25.736 11:13:07 -- common/autotest_common.sh@930 -- # kill -0 61997 00:09:25.736 11:13:07 -- common/autotest_common.sh@931 -- # uname 00:09:25.736 11:13:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:25.736 11:13:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61997 00:09:25.736 killing process with pid 61997 00:09:25.736 11:13:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:25.736 11:13:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:25.736 11:13:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61997' 00:09:25.736 11:13:07 -- common/autotest_common.sh@945 -- # kill 61997 00:09:25.736 11:13:07 -- common/autotest_common.sh@950 -- # wait 61997 00:09:25.994 11:13:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:25.994 11:13:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:25.994 11:13:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:25.994 11:13:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.994 11:13:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:25.994 11:13:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.994 11:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.994 11:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.994 11:13:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:25.994 00:09:25.994 real 0m19.331s 00:09:25.994 user 1m12.358s 00:09:25.994 sys 0m10.061s 00:09:25.994 11:13:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.994 11:13:07 -- common/autotest_common.sh@10 -- # set +x 00:09:25.994 ************************************ 00:09:25.994 END TEST nvmf_multipath 00:09:25.994 ************************************ 00:09:25.994 11:13:07 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.994 11:13:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:25.994 11:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.994 11:13:07 -- common/autotest_common.sh@10 -- # set +x 00:09:25.994 ************************************ 00:09:25.994 START TEST nvmf_zcopy 00:09:25.994 ************************************ 00:09:25.994 11:13:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.252 * Looking for test storage... 00:09:26.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.252 11:13:07 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.252 11:13:07 -- nvmf/common.sh@7 -- # uname -s 00:09:26.252 11:13:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.252 11:13:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.252 11:13:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.252 11:13:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.252 11:13:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.252 11:13:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.252 11:13:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.252 11:13:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.252 11:13:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.252 11:13:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.252 11:13:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:26.252 11:13:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:26.252 11:13:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.252 11:13:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.252 11:13:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.252 11:13:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.252 11:13:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.253 11:13:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.253 11:13:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.253 11:13:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.253 11:13:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.253 
11:13:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.253 11:13:07 -- paths/export.sh@5 -- # export PATH 00:09:26.253 11:13:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.253 11:13:07 -- nvmf/common.sh@46 -- # : 0 00:09:26.253 11:13:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:26.253 11:13:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:26.253 11:13:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:26.253 11:13:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.253 11:13:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.253 11:13:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:26.253 11:13:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:26.253 11:13:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:26.253 11:13:07 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:26.253 11:13:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:26.253 11:13:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.253 11:13:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:26.253 11:13:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:26.253 11:13:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:26.253 11:13:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.253 11:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.253 11:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.253 11:13:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:26.253 11:13:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:26.253 11:13:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:26.253 11:13:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:26.253 11:13:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:26.253 11:13:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:26.253 11:13:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.253 11:13:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.253 11:13:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:26.253 11:13:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:26.253 11:13:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.253 11:13:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.253 11:13:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.253 11:13:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.253 11:13:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.253 11:13:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.253 11:13:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.253 11:13:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.253 11:13:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:26.253 11:13:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:26.253 Cannot find device "nvmf_tgt_br" 00:09:26.253 11:13:07 -- nvmf/common.sh@154 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.253 Cannot find device "nvmf_tgt_br2" 00:09:26.253 11:13:07 -- nvmf/common.sh@155 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:26.253 11:13:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:26.253 Cannot find device "nvmf_tgt_br" 00:09:26.253 11:13:07 -- nvmf/common.sh@157 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:26.253 Cannot find device "nvmf_tgt_br2" 00:09:26.253 11:13:07 -- nvmf/common.sh@158 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:26.253 11:13:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:26.253 11:13:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.253 11:13:07 -- nvmf/common.sh@161 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.253 11:13:07 -- nvmf/common.sh@162 -- # true 00:09:26.253 11:13:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.253 11:13:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.253 11:13:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.253 11:13:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.253 11:13:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.511 11:13:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.511 11:13:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.511 11:13:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:26.511 11:13:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:26.511 11:13:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:26.511 11:13:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:26.511 11:13:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:26.511 11:13:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:26.511 11:13:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.511 11:13:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.512 11:13:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.512 11:13:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:26.512 
11:13:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:26.512 11:13:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.512 11:13:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.512 11:13:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.512 11:13:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.512 11:13:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.512 11:13:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:26.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:26.512 00:09:26.512 --- 10.0.0.2 ping statistics --- 00:09:26.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.512 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:26.512 11:13:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:26.512 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.512 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:26.512 00:09:26.512 --- 10.0.0.3 ping statistics --- 00:09:26.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.512 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:26.512 11:13:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:26.512 00:09:26.512 --- 10.0.0.1 ping statistics --- 00:09:26.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.512 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:26.512 11:13:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.512 11:13:08 -- nvmf/common.sh@421 -- # return 0 00:09:26.512 11:13:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:26.512 11:13:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.512 11:13:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:26.512 11:13:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:26.512 11:13:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.512 11:13:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:26.512 11:13:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:26.512 11:13:08 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.512 11:13:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:26.512 11:13:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:26.512 11:13:08 -- common/autotest_common.sh@10 -- # set +x 00:09:26.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.512 11:13:08 -- nvmf/common.sh@469 -- # nvmfpid=62462 00:09:26.512 11:13:08 -- nvmf/common.sh@470 -- # waitforlisten 62462 00:09:26.512 11:13:08 -- common/autotest_common.sh@819 -- # '[' -z 62462 ']' 00:09:26.512 11:13:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.512 11:13:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:26.512 11:13:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.512 11:13:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
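
Note on the setup traced above: nvmf_veth_init (nvmf/common.sh) builds the virtual test network used by the TCP target — a network namespace nvmf_tgt_ns_spdk holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side veth end left in the root namespace (10.0.0.1), and a bridge nvmf_br joining the peer ends, plus iptables ACCEPT rules for port 4420. A condensed sketch of the same topology, using only the ip/iptables commands that appear in this trace (interface names and addresses as in this run):

    # sketch of the veth/bridge topology built by nvmf_veth_init (names and addresses as traced above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # bridge the peer ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm both directions work before nvmf_tgt is launched inside the namespace with -m 0x2.
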
00:09:26.512 11:13:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.512 11:13:08 -- common/autotest_common.sh@10 -- # set +x 00:09:26.512 [2024-10-13 11:13:08.089469] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:26.512 [2024-10-13 11:13:08.089563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.770 [2024-10-13 11:13:08.222743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.770 [2024-10-13 11:13:08.278395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.770 [2024-10-13 11:13:08.278550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.770 [2024-10-13 11:13:08.278563] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.770 [2024-10-13 11:13:08.278572] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.771 [2024-10-13 11:13:08.278600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.706 11:13:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.706 11:13:09 -- common/autotest_common.sh@852 -- # return 0 00:09:27.706 11:13:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:27.706 11:13:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 11:13:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.706 11:13:09 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:27.706 11:13:09 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [2024-10-13 11:13:09.114188] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [2024-10-13 11:13:09.130285] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
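
Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, zcopy.sh configures it entirely over RPC. The calls traced above (plus the namespace attach that follows on the next lines) reduce to a short sequence: create the TCP transport with zero-copy enabled, create subsystem cnode1, add data and discovery listeners on 10.0.0.2:4420, create a 32 MB malloc bdev with a 4096-byte block size, and expose it as namespace 1. A minimal sketch of that sequence, with flag meanings noted where they are unambiguous (rpc_cmd is a thin wrapper around scripts/rpc.py):

    # sketch of the target-side RPC sequence traced above
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy on, no in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                     # allow any host, serial number, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                         # data listener
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MB malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # attach it as namespace 1
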
00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 malloc0 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:27.706 11:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.706 11:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 11:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.706 11:13:09 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:27.706 11:13:09 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:27.706 11:13:09 -- nvmf/common.sh@520 -- # config=() 00:09:27.706 11:13:09 -- nvmf/common.sh@520 -- # local subsystem config 00:09:27.706 11:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:27.706 11:13:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:27.706 { 00:09:27.706 "params": { 00:09:27.706 "name": "Nvme$subsystem", 00:09:27.706 "trtype": "$TEST_TRANSPORT", 00:09:27.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.706 "adrfam": "ipv4", 00:09:27.706 "trsvcid": "$NVMF_PORT", 00:09:27.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.706 "hdgst": ${hdgst:-false}, 00:09:27.706 "ddgst": ${ddgst:-false} 00:09:27.706 }, 00:09:27.706 "method": "bdev_nvme_attach_controller" 00:09:27.706 } 00:09:27.706 EOF 00:09:27.706 )") 00:09:27.706 11:13:09 -- nvmf/common.sh@542 -- # cat 00:09:27.706 11:13:09 -- nvmf/common.sh@544 -- # jq . 00:09:27.706 11:13:09 -- nvmf/common.sh@545 -- # IFS=, 00:09:27.706 11:13:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:27.706 "params": { 00:09:27.706 "name": "Nvme1", 00:09:27.706 "trtype": "tcp", 00:09:27.706 "traddr": "10.0.0.2", 00:09:27.706 "adrfam": "ipv4", 00:09:27.706 "trsvcid": "4420", 00:09:27.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.706 "hdgst": false, 00:09:27.706 "ddgst": false 00:09:27.706 }, 00:09:27.706 "method": "bdev_nvme_attach_controller" 00:09:27.706 }' 00:09:27.706 [2024-10-13 11:13:09.215002] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:27.706 [2024-10-13 11:13:09.215123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62495 ] 00:09:27.965 [2024-10-13 11:13:09.356087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.965 [2024-10-13 11:13:09.428684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.223 Running I/O for 10 seconds... 
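
bdevperf has no NVMe-oF discovery of its own; gen_nvmf_target_json (nvmf/common.sh) hands it, via --json /dev/fd/62, a bdev configuration that attaches one NVMe/TCP controller to the subsystem created above. Only the inner method/params object is printed in the trace; the sketch below wraps it in the standard SPDK "subsystems"/"bdev" JSON envelope (the envelope and the temporary file name are assumptions here, while the parameter values and the bdevperf flags are exactly as traced):

    # hedged reconstruction of the initiator-side config consumed by bdevperf in the 10-second verify run
    # (the file name is illustrative; the test actually pipes the JSON through /dev/fd/62)
    cat > /tmp/nvmf_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 10-second verify workload, queue depth 128, 8 KiB I/O size, as in the run above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192
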
00:09:38.203 00:09:38.204 Latency(us) 00:09:38.204 [2024-10-13T11:13:19.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.204 [2024-10-13T11:13:19.806Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:38.204 Verification LBA range: start 0x0 length 0x1000 00:09:38.204 Nvme1n1 : 10.01 10093.95 78.86 0.00 0.00 12647.65 1295.83 20852.36 00:09:38.204 [2024-10-13T11:13:19.806Z] =================================================================================================================== 00:09:38.204 [2024-10-13T11:13:19.806Z] Total : 10093.95 78.86 0.00 0.00 12647.65 1295.83 20852.36 00:09:38.204 11:13:19 -- target/zcopy.sh@39 -- # perfpid=62618 00:09:38.204 11:13:19 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:38.204 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:09:38.204 11:13:19 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:38.204 11:13:19 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:38.204 11:13:19 -- nvmf/common.sh@520 -- # config=() 00:09:38.204 11:13:19 -- nvmf/common.sh@520 -- # local subsystem config 00:09:38.204 11:13:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:38.204 11:13:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:38.204 { 00:09:38.204 "params": { 00:09:38.204 "name": "Nvme$subsystem", 00:09:38.204 "trtype": "$TEST_TRANSPORT", 00:09:38.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.204 "adrfam": "ipv4", 00:09:38.204 "trsvcid": "$NVMF_PORT", 00:09:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.204 "hdgst": ${hdgst:-false}, 00:09:38.204 "ddgst": ${ddgst:-false} 00:09:38.204 }, 00:09:38.204 "method": "bdev_nvme_attach_controller" 00:09:38.204 } 00:09:38.204 EOF 00:09:38.204 )") 00:09:38.204 11:13:19 -- nvmf/common.sh@542 -- # cat 00:09:38.204 [2024-10-13 11:13:19.775993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.204 [2024-10-13 11:13:19.776046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.204 11:13:19 -- nvmf/common.sh@544 -- # jq . 
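
Before the second bdevperf run is set up, note that the 10-second verify summary above is internally consistent: at an 8192-byte I/O size, 10093.95 IOPS works out to about 78.9 MiB/s, matching the reported bandwidth, and with a queue depth of 128 the expected average latency is roughly 128 / 10093.95 s ≈ 12.7 ms, in line with the reported 12647.65 µs. A quick check, with the values copied from this run:

    # sanity-check the bdevperf verify summary above
    awk 'BEGIN {
        iops = 10093.95; iosz = 8192; qd = 128
        printf "bandwidth: %.2f MiB/s\n", iops * iosz / (1024 * 1024)     # ~78.86 MiB/s, as reported
        printf "expected avg latency: %.0f us\n", qd / iops * 1e6         # ~12681 us vs 12647.65 us reported
    }'
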
00:09:38.204 11:13:19 -- nvmf/common.sh@545 -- # IFS=, 00:09:38.204 11:13:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:38.204 "params": { 00:09:38.204 "name": "Nvme1", 00:09:38.204 "trtype": "tcp", 00:09:38.204 "traddr": "10.0.0.2", 00:09:38.204 "adrfam": "ipv4", 00:09:38.204 "trsvcid": "4420", 00:09:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.204 "hdgst": false, 00:09:38.204 "ddgst": false 00:09:38.204 }, 00:09:38.204 "method": "bdev_nvme_attach_controller" 00:09:38.204 }' 00:09:38.204 [2024-10-13 11:13:19.783956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.204 [2024-10-13 11:13:19.783995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.204 [2024-10-13 11:13:19.791962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.204 [2024-10-13 11:13:19.792002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.204 [2024-10-13 11:13:19.799957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.204 [2024-10-13 11:13:19.799994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.811958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.811995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.822669] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:38.463 [2024-10-13 11:13:19.822782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62618 ] 00:09:38.463 [2024-10-13 11:13:19.823975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.824000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.835988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.836036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.847991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.848035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.859986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.860029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.871967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.871987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.883971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.883991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.895991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 
11:13:19.896030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.908006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.908046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.919982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.920016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.931985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.932008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.943989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.944010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.955992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.956013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.963376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.463 [2024-10-13 11:13:19.964014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.964036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.976036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.976066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.984016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.984037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:19.992016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:19.992037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.000037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.000066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.008031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.008056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.016031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.016054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.025882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.463 [2024-10-13 11:13:20.028031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.028054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.036046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.036068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.048059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.048093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-10-13 11:13:20.060076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-10-13 11:13:20.060121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.072060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.072091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.084067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.084096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.092085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.092114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.100076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.100101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.108072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.108111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.116081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.116108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.124087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.124113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.132092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.132118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.140094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.140116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.148130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.148158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.156115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.156140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 Running I/O for 5 seconds... 
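
The long stream of "Requested NSID 1 already in use" / "Unable to add namespace" messages around this 5-second randrw run appears to be expected noise rather than a failure: while the workload (perfpid 62618) is in flight, the script seems to keep asking the target to add a namespace with an NSID that is already attached, so each RPC pauses the subsystem, gets rejected, and resumes it, exercising the subsystem pause/resume path with zero-copy I/O outstanding. A rough sketch of that pattern, offered as an interpretation of the trace rather than the literal loop in target/zcopy.sh:

    # hypothetical reconstruction of the namespace-add churn seen in the surrounding trace
    # (the real loop lives in test/nvmf/target/zcopy.sh; its exact structure is not visible in this log)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    while kill -0 "$perfpid" 2>/dev/null; do
        # expected to fail with "Requested NSID 1 already in use" while I/O is running
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
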
00:09:38.723 [2024-10-13 11:13:20.164119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.164142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.181853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.181884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.196985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.197015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.212934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.212981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.221878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.221907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.236872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.236902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.245305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.245361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.257316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.257371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.266869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.266900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.276949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.276978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.286443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.286474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.297765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.297793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.306166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.306195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-10-13 11:13:20.316791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-10-13 11:13:20.316821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.330575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 
[2024-10-13 11:13:20.330603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.339615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.339643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.349846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.349875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.361268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.361297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.369797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.369825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.382232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.382261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.391647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.391690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.401146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.401174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.410851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.410883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.420603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.420634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.430767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.430801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.445199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.445228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.456443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.456473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.465021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.465049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.477003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.477041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.486447] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.486478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.500155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.500192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.508464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.508496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.520223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.520261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.531273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.531301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.539939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.539967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.551348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.551388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.562948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.562978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.982 [2024-10-13 11:13:20.571313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.982 [2024-10-13 11:13:20.571368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.584172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.584201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.593282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.593310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.608501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.608529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.617313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.617369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.629874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.629903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.641095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.641124] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.649456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.649485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.665313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.665355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.674693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.674747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.685930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.685959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.696417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.696445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.709899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.709928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.725066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.725111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.743886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.743916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.758241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.758269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.766836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.766865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.779451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.779479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.788507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.788535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.799890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.799919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.808154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.808182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.820014] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.820042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.242 [2024-10-13 11:13:20.838360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.242 [2024-10-13 11:13:20.838420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.851975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.852015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.860443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.860472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.872707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.872752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.882123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.882152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.891971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.892000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.905713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.905742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.914177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.914206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.924252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.924281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.933316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.933371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.943030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.943089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.956947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.956975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.965716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.965744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.976011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.976039] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.985647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.985676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:20.995317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:20.995411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.009201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.009229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.018167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.018211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.028463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.028491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.037845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.037874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.049649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.049678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.065637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.065666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.076538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.076566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.084723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.084767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.502 [2024-10-13 11:13:21.096178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.502 [2024-10-13 11:13:21.096206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.761 [2024-10-13 11:13:21.108005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.761 [2024-10-13 11:13:21.108033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.761 [2024-10-13 11:13:21.124060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.761 [2024-10-13 11:13:21.124088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.761 [2024-10-13 11:13:21.134929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.761 [2024-10-13 11:13:21.134960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.761 [2024-10-13 11:13:21.143920] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.761 [2024-10-13 11:13:21.143949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.153820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.153847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.163365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.163406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.176891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.176920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.186257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.186287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.196768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.196796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.209607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.209636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.226118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.226147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.244320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.244360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.259422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.259489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.277020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.277062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.292659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.292721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.301665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.301695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.318099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.318128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.327759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.327789] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.341481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.341510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.762 [2024-10-13 11:13:21.349938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.762 [2024-10-13 11:13:21.349967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.361303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.361360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.378799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.378831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.388038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.388066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.397874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.397902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.407819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.407847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.417254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.417283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.426778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.426808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.436434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.436462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.446423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.446452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.457211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.457240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.470685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.470739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.486355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.486395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.497573] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.497602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.505615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.505645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.516998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.517026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.528644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.528673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.536793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.536821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.548784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.548812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.560398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.560427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.568499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.568528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.580780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.580809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.591759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.591787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.599839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.599867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.021 [2024-10-13 11:13:21.611397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.021 [2024-10-13 11:13:21.611437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.621400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.621440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.631152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.631192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.645217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.645245] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.653849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.653877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.665736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.665765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.676750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.676788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.693617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.693649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.709326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.709366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.727154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.727182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.737593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.737622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.753118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.753148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.769976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.770007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.786058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.786087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.795326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.795383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.807928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.807968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.824051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.824088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.842009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.842038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.857900] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.857929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.868922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.868951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.281 [2024-10-13 11:13:21.877707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.281 [2024-10-13 11:13:21.877738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.892442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.892470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.900901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.900929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.915791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.915819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.925110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.925139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.936833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.936861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.945798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.945826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.956049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.956077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.969705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.969733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.978029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.978057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:21.990029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:21.990058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.001234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.001278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.009913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.009941] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.024256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.024285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.032835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.032864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.044766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.044795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.056028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.056056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.064295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.064356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.078894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.078926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.087948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.087976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.098151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.540 [2024-10-13 11:13:22.098179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.540 [2024-10-13 11:13:22.107741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.541 [2024-10-13 11:13:22.107769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.541 [2024-10-13 11:13:22.119425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.541 [2024-10-13 11:13:22.119453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.541 [2024-10-13 11:13:22.136528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.541 [2024-10-13 11:13:22.136572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-10-13 11:13:22.151377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.151423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.168282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.168359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.182965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.182998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.191265] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.191293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.205413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.205441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.213944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.213972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.226882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.226915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.241989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.242018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.259287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.259316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.269497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.269526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.277388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.277416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.289717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.289747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.301031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.301059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.309863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.309892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.321228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.321257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.332271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.332299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.340460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.340520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.355833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.355869] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.365889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.365926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.800 [2024-10-13 11:13:22.380926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.800 [2024-10-13 11:13:22.380980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.398837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.398870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.413207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.413238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.429475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.429503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.439511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.439540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.453617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.453647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.462319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.462393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.475587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.475619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.485802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.485831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.496369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.496413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.508758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.508786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.518090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.518118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.530421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.530450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.541669] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.541697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.550002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.550030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.561443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.561471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.572380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.572408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.580860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.580888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.592419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.592448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.604062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.604091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.620174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.620203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.629231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.629259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.640540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.640568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.059 [2024-10-13 11:13:22.651743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.059 [2024-10-13 11:13:22.651772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.667086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.667115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.677671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.677714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.693660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.693705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.711238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.711282] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.726375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.726434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.737396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.737441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.753693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.753740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.770258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.770303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.787938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.787982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.803776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.803822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.820420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.820473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.838438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.838488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.853577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.853621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.872229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.872292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.886217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.886262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.319 [2024-10-13 11:13:22.901743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.319 [2024-10-13 11:13:22.901787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.920270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.920333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.934687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.934755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.951216] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.951260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.967577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.967621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.985063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.985108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:22.999833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:22.999877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.016382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.016428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.032005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.032033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.049136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.049192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.066885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.066915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.082195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.082241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.099924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.099969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.115765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.115810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.133605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.133650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.149186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.149230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.160122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.160167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.578 [2024-10-13 11:13:23.176599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.578 [2024-10-13 11:13:23.176644] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.836 [2024-10-13 11:13:23.191023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.836 [2024-10-13 11:13:23.191090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.836 [2024-10-13 11:13:23.206460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.836 [2024-10-13 11:13:23.206504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.836 [2024-10-13 11:13:23.223919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.836 [2024-10-13 11:13:23.223964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.836 [2024-10-13 11:13:23.240834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.836 [2024-10-13 11:13:23.240879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.256684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.256730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.275328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.275403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.290546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.290590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.307192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.307236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.324794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.324839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.341115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.341160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.357828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.357872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.375398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.375442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.392525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.392576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.409629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.409674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.837 [2024-10-13 11:13:23.426227] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.837 [2024-10-13 11:13:23.426257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.442190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.442222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.460748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.460799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.474987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.475061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.491476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.491508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.506301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.506355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.523566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.523612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.539266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.539310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.556978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.557023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.572888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.572933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.591244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.591289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.606584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.606629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.617934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.617978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.634175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.634220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.650765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.650813] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.667624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.667674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.096 [2024-10-13 11:13:23.683995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.096 [2024-10-13 11:13:23.684039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.700547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.700590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.716272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.716317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.733736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.733781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.750119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.750167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.766820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.766867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.782155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.782216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.793214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.793259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.808485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.808531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.826360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.826431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.842012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.842057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.859662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.859692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.874800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.874845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.891328] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.891399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.907883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.907926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.925932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.925979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.940737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.940783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.356 [2024-10-13 11:13:23.949896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.356 [2024-10-13 11:13:23.949941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:23.966233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:23.966278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:23.984401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:23.984448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.000086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.000132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.016255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.016300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.033652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.033696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.048131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.048175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.064802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.064847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.081825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.081869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.098550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.615 [2024-10-13 11:13:24.098594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.615 [2024-10-13 11:13:24.115645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.115688] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.616 [2024-10-13 11:13:24.133069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.133137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.616 [2024-10-13 11:13:24.149761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.149823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.616 [2024-10-13 11:13:24.166909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.166954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.616 [2024-10-13 11:13:24.184006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.184050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.616 [2024-10-13 11:13:24.201034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.616 [2024-10-13 11:13:24.201078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.218556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.218601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.234090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.234134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.250944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.250990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.267385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.267448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.283794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.283838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.301711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.301782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.317217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.317275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.329199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.329243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.345257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.345303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.361816] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.361859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.379218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.379278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.394514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.394558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.411812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.411856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.427833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.427877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.443972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.444017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.875 [2024-10-13 11:13:24.461606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.875 [2024-10-13 11:13:24.461652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.477442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.477524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.494536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.494580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.510762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.510808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.526162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.526232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.541745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.541791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.560029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.560074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.574258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.574303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.134 [2024-10-13 11:13:24.590104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.134 [2024-10-13 11:13:24.590148] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.607474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.607502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.623133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.623178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.640683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.640729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.656658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.656703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.674576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.674621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.690150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.690196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.708059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.708105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.135 [2024-10-13 11:13:24.724317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.135 [2024-10-13 11:13:24.724400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.741199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.741245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.757427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.757470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.774312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.774398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.790937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.790991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.807838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.807882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.824797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.824841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.841967] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.842012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.859434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.859477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.874971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.875016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.892407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.892452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.908534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.908588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.924420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.924463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.941549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.941597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.957927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.957983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.974284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.974329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.394 [2024-10-13 11:13:24.990856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.394 [2024-10-13 11:13:24.990904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.007197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.007242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.023657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.023716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.040642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.040690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.057065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.057132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.075081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.075141] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.090557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.090601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.099722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.099767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.116037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.116083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.126987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.127047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.142918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.142964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 [2024-10-13 11:13:25.159751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.653 [2024-10-13 11:13:25.159795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.653 00:09:43.653 Latency(us) 00:09:43.653 [2024-10-13T11:13:25.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.653 [2024-10-13T11:13:25.255Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:43.653 Nvme1n1 : 5.01 13022.30 101.74 0.00 0.00 9818.82 3470.43 19303.33 00:09:43.653 [2024-10-13T11:13:25.256Z] =================================================================================================================== 00:09:43.654 [2024-10-13T11:13:25.256Z] Total : 13022.30 101.74 0.00 0.00 9818.82 3470.43 19303.33 00:09:43.654 [2024-10-13 11:13:25.171130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.171189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.183143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.183185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.195171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.195224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.207172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.207223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.219186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.219238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.231190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.231244] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.654 [2024-10-13 11:13:25.243200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.654 [2024-10-13 11:13:25.243249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.255218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.255262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.267190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.267211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.279226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.279273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.291281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.291332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.303277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.303322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.315241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.315263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.327210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.327246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.339243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.339293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.351223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.351260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 [2024-10-13 11:13:25.363219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.912 [2024-10-13 11:13:25.363240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.912 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62618) - No such process 00:09:43.912 11:13:25 -- target/zcopy.sh@49 -- # wait 62618 00:09:43.912 11:13:25 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.912 11:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:43.912 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 11:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:43.912 11:13:25 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.912 11:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:43.912 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 delay0 00:09:43.912 
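The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appears to be zcopy.sh re-issuing nvmf_subsystem_add_ns for NSID 1 while the Nvme1n1 workload summarized in the latency table is still running, so every attempt is rejected. Once that loop ends, the trace removes the namespace, wraps malloc0 in a delay bdev, and re-adds it for the abort run that follows. A minimal sketch of that RPC sequence, calling scripts/rpc.py directly instead of the test's rpc_cmd wrapper (the rpc variable is illustrative; the NQN and delay parameters are copied from the trace):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the existing NSID 1 from the subsystem.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev so the abort example has slow, in-flight I/O to cancel.
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Re-expose the delayed bdev as NSID 1, then point build/examples/abort at it.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1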
11:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:43.912 11:13:25 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:43.912 11:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:43.912 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 11:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:43.912 11:13:25 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:44.171 [2024-10-13 11:13:25.557137] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:50.753 Initializing NVMe Controllers 00:09:50.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:50.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:50.753 Initialization complete. Launching workers. 00:09:50.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 805 00:09:50.753 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1091, failed to submit 34 00:09:50.753 success 985, unsuccess 106, failed 0 00:09:50.753 11:13:31 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:50.753 11:13:31 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:50.753 11:13:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:50.753 11:13:31 -- nvmf/common.sh@116 -- # sync 00:09:50.753 11:13:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:50.753 11:13:31 -- nvmf/common.sh@119 -- # set +e 00:09:50.753 11:13:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:50.753 11:13:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:50.753 rmmod nvme_tcp 00:09:50.753 rmmod nvme_fabrics 00:09:50.753 rmmod nvme_keyring 00:09:50.753 11:13:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:50.753 11:13:31 -- nvmf/common.sh@123 -- # set -e 00:09:50.753 11:13:31 -- nvmf/common.sh@124 -- # return 0 00:09:50.753 11:13:31 -- nvmf/common.sh@477 -- # '[' -n 62462 ']' 00:09:50.753 11:13:31 -- nvmf/common.sh@478 -- # killprocess 62462 00:09:50.753 11:13:31 -- common/autotest_common.sh@926 -- # '[' -z 62462 ']' 00:09:50.753 11:13:31 -- common/autotest_common.sh@930 -- # kill -0 62462 00:09:50.753 11:13:31 -- common/autotest_common.sh@931 -- # uname 00:09:50.753 11:13:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:50.753 11:13:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62462 00:09:50.753 killing process with pid 62462 00:09:50.753 11:13:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:50.753 11:13:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:50.753 11:13:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62462' 00:09:50.753 11:13:31 -- common/autotest_common.sh@945 -- # kill 62462 00:09:50.753 11:13:31 -- common/autotest_common.sh@950 -- # wait 62462 00:09:50.753 11:13:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:50.753 11:13:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:50.753 11:13:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:50.753 11:13:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.753 11:13:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:50.753 11:13:32 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.753 11:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.753 11:13:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.753 11:13:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:50.753 00:09:50.753 real 0m24.531s 00:09:50.753 user 0m40.353s 00:09:50.753 sys 0m6.452s 00:09:50.753 11:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.753 11:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.753 ************************************ 00:09:50.753 END TEST nvmf_zcopy 00:09:50.753 ************************************ 00:09:50.753 11:13:32 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.753 11:13:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:50.753 11:13:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.753 11:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.753 ************************************ 00:09:50.753 START TEST nvmf_nmic 00:09:50.753 ************************************ 00:09:50.753 11:13:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.753 * Looking for test storage... 00:09:50.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.753 11:13:32 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.754 11:13:32 -- nvmf/common.sh@7 -- # uname -s 00:09:50.754 11:13:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.754 11:13:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.754 11:13:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.754 11:13:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.754 11:13:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.754 11:13:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.754 11:13:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.754 11:13:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.754 11:13:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.754 11:13:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:50.754 11:13:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:50.754 11:13:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.754 11:13:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.754 11:13:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.754 11:13:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.754 11:13:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.754 11:13:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.754 11:13:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.754 11:13:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.754 11:13:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.754 11:13:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.754 11:13:32 -- paths/export.sh@5 -- # export PATH 00:09:50.754 11:13:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.754 11:13:32 -- nvmf/common.sh@46 -- # : 0 00:09:50.754 11:13:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:50.754 11:13:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:50.754 11:13:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:50.754 11:13:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.754 11:13:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.754 11:13:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:50.754 11:13:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:50.754 11:13:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:50.754 11:13:32 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.754 11:13:32 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.754 11:13:32 -- target/nmic.sh@14 -- # nvmftestinit 00:09:50.754 11:13:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:50.754 11:13:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.754 11:13:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:50.754 11:13:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:50.754 11:13:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:50.754 11:13:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:50.754 11:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.754 11:13:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.754 11:13:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:50.754 11:13:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:50.754 11:13:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.754 11:13:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.754 11:13:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.754 11:13:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:50.754 11:13:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.754 11:13:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.754 11:13:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.754 11:13:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.754 11:13:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.754 11:13:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.754 11:13:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.754 11:13:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.754 11:13:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:50.754 11:13:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:50.754 Cannot find device "nvmf_tgt_br" 00:09:50.754 11:13:32 -- nvmf/common.sh@154 -- # true 00:09:50.754 11:13:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.754 Cannot find device "nvmf_tgt_br2" 00:09:50.754 11:13:32 -- nvmf/common.sh@155 -- # true 00:09:50.754 11:13:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:50.754 11:13:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:50.754 Cannot find device "nvmf_tgt_br" 00:09:50.754 11:13:32 -- nvmf/common.sh@157 -- # true 00:09:50.754 11:13:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:50.754 Cannot find device "nvmf_tgt_br2" 00:09:50.754 11:13:32 -- nvmf/common.sh@158 -- # true 00:09:50.754 11:13:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:50.754 11:13:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:51.013 11:13:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.013 11:13:32 -- nvmf/common.sh@161 -- # true 00:09:51.013 11:13:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.013 11:13:32 -- nvmf/common.sh@162 -- # true 00:09:51.013 11:13:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.013 11:13:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.013 11:13:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.013 11:13:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.013 
11:13:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.013 11:13:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.013 11:13:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.013 11:13:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:51.013 11:13:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:51.013 11:13:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:51.013 11:13:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:51.013 11:13:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:51.013 11:13:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:51.013 11:13:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.013 11:13:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.013 11:13:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.013 11:13:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:51.013 11:13:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:51.013 11:13:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.013 11:13:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.013 11:13:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.013 11:13:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.013 11:13:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.013 11:13:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:51.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:51.013 00:09:51.013 --- 10.0.0.2 ping statistics --- 00:09:51.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.013 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:51.013 11:13:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:51.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:51.013 00:09:51.013 --- 10.0.0.3 ping statistics --- 00:09:51.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.013 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:51.013 11:13:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:51.013 00:09:51.013 --- 10.0.0.1 ping statistics --- 00:09:51.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.013 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:51.013 11:13:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.013 11:13:32 -- nvmf/common.sh@421 -- # return 0 00:09:51.013 11:13:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:51.013 11:13:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.013 11:13:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:51.013 11:13:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:51.013 11:13:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.013 11:13:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:51.013 11:13:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:51.013 11:13:32 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:51.013 11:13:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:51.013 11:13:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:51.013 11:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:51.013 11:13:32 -- nvmf/common.sh@469 -- # nvmfpid=62934 00:09:51.013 11:13:32 -- nvmf/common.sh@470 -- # waitforlisten 62934 00:09:51.013 11:13:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.013 11:13:32 -- common/autotest_common.sh@819 -- # '[' -z 62934 ']' 00:09:51.013 11:13:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.013 11:13:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.013 11:13:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.013 11:13:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.013 11:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:51.273 [2024-10-13 11:13:32.653251] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:51.273 [2024-10-13 11:13:32.653374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.273 [2024-10-13 11:13:32.792605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.273 [2024-10-13 11:13:32.846172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.273 [2024-10-13 11:13:32.846317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.273 [2024-10-13 11:13:32.846361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.273 [2024-10-13 11:13:32.846370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
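At this point nmic.sh has finished the veth/namespace plumbing (the three pings above) and started nvmf_tgt inside nvmf_tgt_ns_spdk as pid 62934. The rpc_cmd calls that follow create the TCP transport, a malloc bdev, and subsystem cnode1 with a listener on 10.0.0.2:4420. Condensed into direct rpc.py calls with the same arguments as the trace (the rpc variable is illustrative):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create the TCP transport with the same options the test passes.
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE in nmic.sh.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420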
00:09:51.273 [2024-10-13 11:13:32.846485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.273 [2024-10-13 11:13:32.846561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.273 [2024-10-13 11:13:32.847317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.273 [2024-10-13 11:13:32.847343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.209 11:13:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.209 11:13:33 -- common/autotest_common.sh@852 -- # return 0 00:09:52.209 11:13:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:52.209 11:13:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 11:13:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.209 11:13:33 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 [2024-10-13 11:13:33.702667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 Malloc0 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 [2024-10-13 11:13:33.758966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 test case1: single bdev can't be used in multiple subsystems 00:09:52.209 11:13:33 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:52.209 11:13:33 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@28 -- # nmic_status=0 00:09:52.209 11:13:33 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 [2024-10-13 11:13:33.782843] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:52.209 [2024-10-13 11:13:33.782880] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:52.209 [2024-10-13 11:13:33.782891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.209 request: 00:09:52.209 { 00:09:52.209 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.209 "namespace": { 00:09:52.209 "bdev_name": "Malloc0" 00:09:52.209 }, 00:09:52.209 "method": "nvmf_subsystem_add_ns", 00:09:52.209 "req_id": 1 00:09:52.209 } 00:09:52.209 Got JSON-RPC error response 00:09:52.209 response: 00:09:52.209 { 00:09:52.209 "code": -32602, 00:09:52.209 "message": "Invalid parameters" 00:09:52.209 } 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@29 -- # nmic_status=1 00:09:52.209 11:13:33 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:52.209 Adding namespace failed - expected result. 00:09:52.209 11:13:33 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:52.209 test case2: host connect to nvmf target in multiple paths 00:09:52.209 11:13:33 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:52.209 11:13:33 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:52.209 11:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.209 11:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 [2024-10-13 11:13:33.794952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:52.209 11:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.209 11:13:33 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.468 11:13:33 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:52.468 11:13:34 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.468 11:13:34 -- common/autotest_common.sh@1177 -- # local i=0 00:09:52.468 11:13:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.468 11:13:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:52.468 11:13:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:55.004 11:13:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:55.004 11:13:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:55.004 11:13:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.004 11:13:36 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:09:55.004 11:13:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.004 11:13:36 -- common/autotest_common.sh@1187 -- # return 0 00:09:55.004 11:13:36 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:55.004 [global] 00:09:55.004 thread=1 00:09:55.004 invalidate=1 00:09:55.004 rw=write 00:09:55.004 time_based=1 00:09:55.004 runtime=1 00:09:55.004 ioengine=libaio 00:09:55.004 direct=1 00:09:55.004 bs=4096 00:09:55.004 iodepth=1 00:09:55.004 norandommap=0 00:09:55.004 numjobs=1 00:09:55.004 00:09:55.004 verify_dump=1 00:09:55.004 verify_backlog=512 00:09:55.004 verify_state_save=0 00:09:55.004 do_verify=1 00:09:55.004 verify=crc32c-intel 00:09:55.004 [job0] 00:09:55.004 filename=/dev/nvme0n1 00:09:55.004 Could not set queue depth (nvme0n1) 00:09:55.004 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.004 fio-3.35 00:09:55.004 Starting 1 thread 00:09:55.941 00:09:55.941 job0: (groupid=0, jobs=1): err= 0: pid=63026: Sun Oct 13 11:13:37 2024 00:09:55.941 read: IOPS=2845, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:09:55.941 slat (nsec): min=10684, max=63373, avg=13404.68, stdev=4407.05 00:09:55.941 clat (usec): min=134, max=662, avg=187.30, stdev=27.09 00:09:55.941 lat (usec): min=145, max=676, avg=200.71, stdev=27.69 00:09:55.941 clat percentiles (usec): 00:09:55.941 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:09:55.941 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:09:55.941 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 231], 00:09:55.941 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 408], 99.95th=[ 449], 00:09:55.941 | 99.99th=[ 660] 00:09:55.941 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:55.941 slat (usec): min=15, max=106, avg=21.15, stdev= 7.75 00:09:55.941 clat (usec): min=74, max=298, avg=115.17, stdev=19.05 00:09:55.941 lat (usec): min=99, max=322, avg=136.32, stdev=21.66 00:09:55.941 clat percentiles (usec): 00:09:55.941 | 1.00th=[ 86], 5.00th=[ 92], 10.00th=[ 96], 20.00th=[ 100], 00:09:55.941 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 116], 00:09:55.941 | 70.00th=[ 122], 80.00th=[ 130], 90.00th=[ 143], 95.00th=[ 151], 00:09:55.941 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 200], 99.95th=[ 293], 00:09:55.941 | 99.99th=[ 297] 00:09:55.941 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:55.941 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:55.941 lat (usec) : 100=10.05%, 250=89.34%, 500=0.59%, 750=0.02% 00:09:55.941 cpu : usr=2.80%, sys=7.20%, ctx=5923, majf=0, minf=5 00:09:55.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.941 issued rwts: total=2848,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.941 00:09:55.941 Run status group 0 (all jobs): 00:09:55.941 READ: bw=11.1MiB/s (11.7MB/s), 11.1MiB/s-11.1MiB/s (11.7MB/s-11.7MB/s), io=11.1MiB (11.7MB), run=1001-1001msec 00:09:55.941 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:55.941 00:09:55.941 Disk stats (read/write): 
00:09:55.941 nvme0n1: ios=2610/2751, merge=0/0, ticks=531/366, in_queue=897, util=91.38% 00:09:55.941 11:13:37 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:55.941 11:13:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.941 11:13:37 -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.941 11:13:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:55.941 11:13:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.941 11:13:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:55.941 11:13:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.941 11:13:37 -- common/autotest_common.sh@1210 -- # return 0 00:09:55.941 11:13:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:55.941 11:13:37 -- target/nmic.sh@53 -- # nvmftestfini 00:09:55.941 11:13:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:55.941 11:13:37 -- nvmf/common.sh@116 -- # sync 00:09:55.941 11:13:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:55.941 11:13:37 -- nvmf/common.sh@119 -- # set +e 00:09:55.941 11:13:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:55.941 11:13:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:55.941 rmmod nvme_tcp 00:09:55.941 rmmod nvme_fabrics 00:09:56.200 rmmod nvme_keyring 00:09:56.200 11:13:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:56.200 11:13:37 -- nvmf/common.sh@123 -- # set -e 00:09:56.200 11:13:37 -- nvmf/common.sh@124 -- # return 0 00:09:56.200 11:13:37 -- nvmf/common.sh@477 -- # '[' -n 62934 ']' 00:09:56.200 11:13:37 -- nvmf/common.sh@478 -- # killprocess 62934 00:09:56.200 11:13:37 -- common/autotest_common.sh@926 -- # '[' -z 62934 ']' 00:09:56.200 11:13:37 -- common/autotest_common.sh@930 -- # kill -0 62934 00:09:56.200 11:13:37 -- common/autotest_common.sh@931 -- # uname 00:09:56.200 11:13:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.200 11:13:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62934 00:09:56.200 11:13:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:56.200 11:13:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:56.200 killing process with pid 62934 00:09:56.200 11:13:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62934' 00:09:56.200 11:13:37 -- common/autotest_common.sh@945 -- # kill 62934 00:09:56.200 11:13:37 -- common/autotest_common.sh@950 -- # wait 62934 00:09:56.200 11:13:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:56.200 11:13:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:56.200 11:13:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:56.200 11:13:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.200 11:13:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:56.200 11:13:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.200 11:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.200 11:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.485 11:13:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:56.485 00:09:56.485 real 0m5.720s 00:09:56.485 user 0m18.436s 00:09:56.485 sys 0m2.232s 00:09:56.485 11:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.485 11:13:37 -- common/autotest_common.sh@10 -- # set +x 
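Host-side, the test connected to cnode1 through both listeners, ran a 4 KiB write/verify job through the fio wrapper, and then disconnected before tearing the target down. Pulled out of the xtrace above into a stand-alone sketch (the shell variables are illustrative; hostnqn/hostid are the values generated earlier in this run):
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47
hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47
# Two connects give two paths (ports 4420 and 4421) to the same namespace.
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.2 -s 4421
# 4 KiB, queue depth 1, write workload with crc32c verification (the fio job shown above).
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n $nqn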
00:09:56.485 ************************************ 00:09:56.485 END TEST nvmf_nmic 00:09:56.485 ************************************ 00:09:56.485 11:13:37 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:56.485 11:13:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:56.485 11:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.485 11:13:37 -- common/autotest_common.sh@10 -- # set +x 00:09:56.485 ************************************ 00:09:56.485 START TEST nvmf_fio_target 00:09:56.485 ************************************ 00:09:56.485 11:13:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:56.485 * Looking for test storage... 00:09:56.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.485 11:13:37 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.485 11:13:37 -- nvmf/common.sh@7 -- # uname -s 00:09:56.485 11:13:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.485 11:13:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.485 11:13:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.485 11:13:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.485 11:13:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.485 11:13:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.485 11:13:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.485 11:13:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.485 11:13:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.485 11:13:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.485 11:13:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:56.485 11:13:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:09:56.485 11:13:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.485 11:13:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.485 11:13:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.485 11:13:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.485 11:13:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.485 11:13:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.485 11:13:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.485 11:13:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.485 11:13:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.486 11:13:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.486 11:13:37 -- paths/export.sh@5 -- # export PATH 00:09:56.486 11:13:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.486 11:13:37 -- nvmf/common.sh@46 -- # : 0 00:09:56.486 11:13:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:56.486 11:13:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:56.486 11:13:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:56.486 11:13:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.486 11:13:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.486 11:13:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:56.486 11:13:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:56.486 11:13:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:56.486 11:13:37 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.486 11:13:37 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.486 11:13:37 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.486 11:13:37 -- target/fio.sh@16 -- # nvmftestinit 00:09:56.486 11:13:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:56.486 11:13:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.486 11:13:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:56.486 11:13:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:56.486 11:13:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:56.486 11:13:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.486 11:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.486 11:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.486 11:13:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:56.486 11:13:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:56.486 11:13:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:56.486 11:13:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
00:09:56.486 11:13:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:56.486 11:13:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:56.486 11:13:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.486 11:13:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.486 11:13:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.486 11:13:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:56.486 11:13:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.486 11:13:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.486 11:13:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.486 11:13:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.486 11:13:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.486 11:13:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.486 11:13:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.486 11:13:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.486 11:13:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:56.486 11:13:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:56.486 Cannot find device "nvmf_tgt_br" 00:09:56.486 11:13:38 -- nvmf/common.sh@154 -- # true 00:09:56.486 11:13:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.486 Cannot find device "nvmf_tgt_br2" 00:09:56.486 11:13:38 -- nvmf/common.sh@155 -- # true 00:09:56.486 11:13:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:56.486 11:13:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:56.486 Cannot find device "nvmf_tgt_br" 00:09:56.486 11:13:38 -- nvmf/common.sh@157 -- # true 00:09:56.486 11:13:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:56.486 Cannot find device "nvmf_tgt_br2" 00:09:56.486 11:13:38 -- nvmf/common.sh@158 -- # true 00:09:56.486 11:13:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:56.747 11:13:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:56.747 11:13:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.747 11:13:38 -- nvmf/common.sh@161 -- # true 00:09:56.747 11:13:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.747 11:13:38 -- nvmf/common.sh@162 -- # true 00:09:56.747 11:13:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.747 11:13:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.747 11:13:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.747 11:13:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.747 11:13:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.747 11:13:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.747 11:13:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.747 11:13:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.747 11:13:38 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.747 11:13:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:56.747 11:13:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:56.747 11:13:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:56.747 11:13:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:56.747 11:13:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.747 11:13:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.747 11:13:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.747 11:13:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:56.747 11:13:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:56.747 11:13:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.747 11:13:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.747 11:13:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.747 11:13:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.747 11:13:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.747 11:13:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:56.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:56.747 00:09:56.747 --- 10.0.0.2 ping statistics --- 00:09:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.747 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:56.747 11:13:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:56.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:56.747 00:09:56.747 --- 10.0.0.3 ping statistics --- 00:09:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.747 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:56.747 11:13:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:56.747 00:09:56.747 --- 10.0.0.1 ping statistics --- 00:09:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.747 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:56.747 11:13:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.747 11:13:38 -- nvmf/common.sh@421 -- # return 0 00:09:56.747 11:13:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:56.747 11:13:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.747 11:13:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:56.747 11:13:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:56.747 11:13:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.747 11:13:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:56.747 11:13:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:56.747 11:13:38 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:56.747 11:13:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:56.747 11:13:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:56.747 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.747 11:13:38 -- nvmf/common.sh@469 -- # nvmfpid=63202 00:09:56.747 11:13:38 -- nvmf/common.sh@470 -- # waitforlisten 63202 00:09:56.747 11:13:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.747 11:13:38 -- common/autotest_common.sh@819 -- # '[' -z 63202 ']' 00:09:56.747 11:13:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.747 11:13:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:56.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.747 11:13:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.747 11:13:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:56.747 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:09:57.005 [2024-10-13 11:13:38.382489] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:57.005 [2024-10-13 11:13:38.382577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.005 [2024-10-13 11:13:38.521249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.005 [2024-10-13 11:13:38.578803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.005 [2024-10-13 11:13:38.578981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.005 [2024-10-13 11:13:38.578995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.005 [2024-10-13 11:13:38.579004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
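Condensed, the topology that nvmf_veth_init has just built and verified looks like this: the initiator end of a veth pair (nvmf_init_if, 10.0.0.1/24) stays on the host, the target ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to the nvmf_br bridge, TCP port 4420 is allowed in, and nvmf_tgt then runs inside the namespace. The sketch below is condensed and reordered from the trace above, with the initial cleanup and error handling omitted; it is not a verbatim copy of nvmf/common.sh:

# namespace plus three veth pairs (one initiator-side, two target-side)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# target ends move into the namespace and receive the target addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP traffic on 4420 and let bridged frames pass
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity pings in both directions, then the target runs inside the namespace
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Keeping both target interfaces in their own namespace lets the later nvme connect to 10.0.0.2 exercise a real TCP path through the bridge while the whole test stays on a single VM.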
00:09:57.005 [2024-10-13 11:13:38.579378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.005 [2024-10-13 11:13:38.579487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.005 [2024-10-13 11:13:38.579943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.005 [2024-10-13 11:13:38.579993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.940 11:13:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.940 11:13:39 -- common/autotest_common.sh@852 -- # return 0 00:09:57.940 11:13:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:57.940 11:13:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:57.940 11:13:39 -- common/autotest_common.sh@10 -- # set +x 00:09:57.940 11:13:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.940 11:13:39 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.198 [2024-10-13 11:13:39.627009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.198 11:13:39 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.456 11:13:39 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:58.456 11:13:39 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.716 11:13:40 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:58.716 11:13:40 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.978 11:13:40 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:58.978 11:13:40 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.236 11:13:40 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:59.236 11:13:40 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:59.496 11:13:40 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.754 11:13:41 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:59.754 11:13:41 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.013 11:13:41 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:00.013 11:13:41 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.271 11:13:41 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:00.271 11:13:41 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:00.529 11:13:42 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.788 11:13:42 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:00.788 11:13:42 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.046 11:13:42 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.046 11:13:42 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.304 11:13:42 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.562 [2024-10-13 11:13:43.015994] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.562 11:13:43 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:01.820 11:13:43 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:02.079 11:13:43 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.079 11:13:43 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:02.079 11:13:43 -- common/autotest_common.sh@1177 -- # local i=0 00:10:02.079 11:13:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.079 11:13:43 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:10:02.079 11:13:43 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:10:02.079 11:13:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:04.609 11:13:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:04.609 11:13:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:04.609 11:13:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.609 11:13:45 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:10:04.609 11:13:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.609 11:13:45 -- common/autotest_common.sh@1187 -- # return 0 00:10:04.609 11:13:45 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:04.609 [global] 00:10:04.609 thread=1 00:10:04.609 invalidate=1 00:10:04.609 rw=write 00:10:04.609 time_based=1 00:10:04.609 runtime=1 00:10:04.609 ioengine=libaio 00:10:04.609 direct=1 00:10:04.609 bs=4096 00:10:04.609 iodepth=1 00:10:04.609 norandommap=0 00:10:04.609 numjobs=1 00:10:04.609 00:10:04.609 verify_dump=1 00:10:04.609 verify_backlog=512 00:10:04.609 verify_state_save=0 00:10:04.609 do_verify=1 00:10:04.609 verify=crc32c-intel 00:10:04.609 [job0] 00:10:04.609 filename=/dev/nvme0n1 00:10:04.609 [job1] 00:10:04.609 filename=/dev/nvme0n2 00:10:04.609 [job2] 00:10:04.609 filename=/dev/nvme0n3 00:10:04.609 [job3] 00:10:04.609 filename=/dev/nvme0n4 00:10:04.609 Could not set queue depth (nvme0n1) 00:10:04.609 Could not set queue depth (nvme0n2) 00:10:04.609 Could not set queue depth (nvme0n3) 00:10:04.609 Could not set queue depth (nvme0n4) 00:10:04.609 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.609 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.609 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.609 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.609 fio-3.35 00:10:04.609 Starting 4 threads 00:10:05.545 00:10:05.545 job0: (groupid=0, jobs=1): err= 0: pid=63387: Sun Oct 13 11:13:47 2024 00:10:05.545 read: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:10:05.545 slat (nsec): min=11650, max=46117, avg=14120.03, stdev=2980.19 00:10:05.545 clat (usec): min=129, max=2585, avg=167.89, stdev=50.66 
00:10:05.545 lat (usec): min=142, max=2598, avg=182.01, stdev=50.77 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 155], 00:10:05.545 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:10:05.545 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:10:05.545 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 553], 99.95th=[ 1156], 00:10:05.545 | 99.99th=[ 2573] 00:10:05.545 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:05.545 slat (nsec): min=13913, max=96433, avg=20780.94, stdev=4053.68 00:10:05.545 clat (usec): min=93, max=237, avg=127.64, stdev=13.71 00:10:05.545 lat (usec): min=112, max=333, avg=148.43, stdev=14.25 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:10:05.545 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:10:05.545 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153], 00:10:05.545 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 198], 99.95th=[ 231], 00:10:05.545 | 99.99th=[ 237] 00:10:05.545 bw ( KiB/s): min=12288, max=12288, per=30.09%, avg=12288.00, stdev= 0.00, samples=1 00:10:05.545 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:05.545 lat (usec) : 100=0.32%, 250=99.63%, 750=0.02% 00:10:05.545 lat (msec) : 2=0.02%, 4=0.02% 00:10:05.545 cpu : usr=1.90%, sys=8.50%, ctx=6009, majf=0, minf=15 00:10:05.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.545 issued rwts: total=2937,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.545 job1: (groupid=0, jobs=1): err= 0: pid=63388: Sun Oct 13 11:13:47 2024 00:10:05.545 read: IOPS=1863, BW=7453KiB/s (7631kB/s)(7460KiB/1001msec) 00:10:05.545 slat (nsec): min=11780, max=50974, avg=14810.83, stdev=3857.06 00:10:05.545 clat (usec): min=162, max=536, avg=269.59, stdev=29.49 00:10:05.545 lat (usec): min=175, max=551, avg=284.40, stdev=30.36 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 253], 00:10:05.545 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:10:05.545 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:10:05.545 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 537], 00:10:05.545 | 99.99th=[ 537] 00:10:05.545 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:05.545 slat (usec): min=15, max=131, avg=21.54, stdev= 5.17 00:10:05.545 clat (usec): min=107, max=820, avg=204.55, stdev=29.39 00:10:05.545 lat (usec): min=125, max=855, avg=226.09, stdev=30.79 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 120], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:10:05.545 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:10:05.545 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 241], 00:10:05.545 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 379], 99.95th=[ 379], 00:10:05.545 | 99.99th=[ 824] 00:10:05.545 bw ( KiB/s): min= 8192, max= 8192, per=20.06%, avg=8192.00, stdev= 0.00, samples=1 00:10:05.545 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:05.545 lat (usec) : 250=59.24%, 500=40.71%, 750=0.03%, 1000=0.03% 
00:10:05.545 cpu : usr=1.70%, sys=5.30%, ctx=3914, majf=0, minf=9 00:10:05.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.545 issued rwts: total=1865,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.545 job2: (groupid=0, jobs=1): err= 0: pid=63389: Sun Oct 13 11:13:47 2024 00:10:05.545 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:05.545 slat (nsec): min=11574, max=40136, avg=14459.22, stdev=2657.68 00:10:05.545 clat (usec): min=148, max=306, avg=183.03, stdev=14.99 00:10:05.545 lat (usec): min=160, max=321, avg=197.49, stdev=14.88 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:10:05.545 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:05.545 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:10:05.545 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 245], 99.95th=[ 247], 00:10:05.545 | 99.99th=[ 306] 00:10:05.545 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:10:05.545 slat (usec): min=14, max=126, avg=21.74, stdev= 5.60 00:10:05.545 clat (usec): min=103, max=1586, avg=137.34, stdev=30.08 00:10:05.545 lat (usec): min=124, max=1633, avg=159.08, stdev=30.94 00:10:05.545 clat percentiles (usec): 00:10:05.545 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:10:05.545 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:10:05.545 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:10:05.545 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 206], 99.95th=[ 359], 00:10:05.545 | 99.99th=[ 1582] 00:10:05.546 bw ( KiB/s): min=12288, max=12288, per=30.09%, avg=12288.00, stdev= 0.00, samples=1 00:10:05.546 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:05.546 lat (usec) : 250=99.95%, 500=0.04% 00:10:05.546 lat (msec) : 2=0.02% 00:10:05.546 cpu : usr=2.50%, sys=7.50%, ctx=5614, majf=0, minf=5 00:10:05.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.546 issued rwts: total=2560,3051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.546 job3: (groupid=0, jobs=1): err= 0: pid=63390: Sun Oct 13 11:13:47 2024 00:10:05.546 read: IOPS=1871, BW=7485KiB/s (7664kB/s)(7492KiB/1001msec) 00:10:05.546 slat (nsec): min=11110, max=43354, avg=14283.94, stdev=3113.39 00:10:05.546 clat (usec): min=168, max=1082, avg=270.53, stdev=35.63 00:10:05.546 lat (usec): min=179, max=1097, avg=284.81, stdev=35.80 00:10:05.546 clat percentiles (usec): 00:10:05.546 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:10:05.546 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:05.546 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:10:05.546 | 99.00th=[ 400], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 1090], 00:10:05.546 | 99.99th=[ 1090] 00:10:05.546 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:05.546 slat (nsec): min=17227, max=91433, avg=21798.68, stdev=4876.07 00:10:05.546 
clat (usec): min=104, max=295, avg=202.74, stdev=21.49 00:10:05.546 lat (usec): min=123, max=377, avg=224.54, stdev=22.31 00:10:05.546 clat percentiles (usec): 00:10:05.546 | 1.00th=[ 129], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:10:05.546 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:10:05.546 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 241], 00:10:05.546 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 265], 99.95th=[ 269], 00:10:05.546 | 99.99th=[ 297] 00:10:05.546 bw ( KiB/s): min= 8192, max= 8192, per=20.06%, avg=8192.00, stdev= 0.00, samples=1 00:10:05.546 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:05.546 lat (usec) : 250=59.60%, 500=40.27%, 750=0.10% 00:10:05.546 lat (msec) : 2=0.03% 00:10:05.546 cpu : usr=1.80%, sys=5.30%, ctx=3923, majf=0, minf=7 00:10:05.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.546 issued rwts: total=1873,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.546 00:10:05.546 Run status group 0 (all jobs): 00:10:05.546 READ: bw=36.0MiB/s (37.8MB/s), 7453KiB/s-11.5MiB/s (7631kB/s-12.0MB/s), io=36.1MiB (37.8MB), run=1001-1001msec 00:10:05.546 WRITE: bw=39.9MiB/s (41.8MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.9MiB (41.9MB), run=1001-1001msec 00:10:05.546 00:10:05.546 Disk stats (read/write): 00:10:05.546 nvme0n1: ios=2610/2586, merge=0/0, ticks=466/352, in_queue=818, util=87.58% 00:10:05.546 nvme0n2: ios=1568/1831, merge=0/0, ticks=445/386, in_queue=831, util=88.31% 00:10:05.546 nvme0n3: ios=2235/2560, merge=0/0, ticks=426/368, in_queue=794, util=89.20% 00:10:05.546 nvme0n4: ios=1536/1854, merge=0/0, ticks=421/391, in_queue=812, util=89.76% 00:10:05.546 11:13:47 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:05.546 [global] 00:10:05.546 thread=1 00:10:05.546 invalidate=1 00:10:05.546 rw=randwrite 00:10:05.546 time_based=1 00:10:05.546 runtime=1 00:10:05.546 ioengine=libaio 00:10:05.546 direct=1 00:10:05.546 bs=4096 00:10:05.546 iodepth=1 00:10:05.546 norandommap=0 00:10:05.546 numjobs=1 00:10:05.546 00:10:05.546 verify_dump=1 00:10:05.546 verify_backlog=512 00:10:05.546 verify_state_save=0 00:10:05.546 do_verify=1 00:10:05.546 verify=crc32c-intel 00:10:05.546 [job0] 00:10:05.546 filename=/dev/nvme0n1 00:10:05.546 [job1] 00:10:05.546 filename=/dev/nvme0n2 00:10:05.546 [job2] 00:10:05.546 filename=/dev/nvme0n3 00:10:05.546 [job3] 00:10:05.546 filename=/dev/nvme0n4 00:10:05.546 Could not set queue depth (nvme0n1) 00:10:05.546 Could not set queue depth (nvme0n2) 00:10:05.546 Could not set queue depth (nvme0n3) 00:10:05.546 Could not set queue depth (nvme0n4) 00:10:05.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.804 fio-3.35 00:10:05.804 Starting 4 threads 00:10:07.179 00:10:07.179 job0: (groupid=0, jobs=1): 
err= 0: pid=63449: Sun Oct 13 11:13:48 2024 00:10:07.180 read: IOPS=2920, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:10:07.180 slat (nsec): min=10822, max=60019, avg=12985.99, stdev=2744.84 00:10:07.180 clat (usec): min=130, max=3628, avg=171.73, stdev=111.79 00:10:07.180 lat (usec): min=142, max=3688, avg=184.71, stdev=113.35 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:07.180 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:10:07.180 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 188], 00:10:07.180 | 99.00th=[ 200], 99.50th=[ 293], 99.90th=[ 2704], 99.95th=[ 3556], 00:10:07.180 | 99.99th=[ 3621] 00:10:07.180 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:07.180 slat (nsec): min=14422, max=81868, avg=19529.43, stdev=2869.52 00:10:07.180 clat (usec): min=91, max=262, avg=127.20, stdev=11.27 00:10:07.180 lat (usec): min=108, max=344, avg=146.72, stdev=11.71 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:10:07.180 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:10:07.180 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:10:07.180 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 190], 00:10:07.180 | 99.99th=[ 265] 00:10:07.180 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:07.180 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:07.180 lat (usec) : 100=0.08%, 250=99.65%, 500=0.12%, 750=0.05% 00:10:07.180 lat (msec) : 2=0.05%, 4=0.05% 00:10:07.180 cpu : usr=2.60%, sys=7.30%, ctx=5995, majf=0, minf=15 00:10:07.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 issued rwts: total=2923,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.180 job1: (groupid=0, jobs=1): err= 0: pid=63450: Sun Oct 13 11:13:48 2024 00:10:07.180 read: IOPS=1920, BW=7680KiB/s (7865kB/s)(7688KiB/1001msec) 00:10:07.180 slat (nsec): min=11731, max=33706, avg=13735.98, stdev=2566.28 00:10:07.180 clat (usec): min=168, max=538, avg=268.81, stdev=31.92 00:10:07.180 lat (usec): min=183, max=553, avg=282.55, stdev=33.13 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:10:07.180 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:07.180 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:10:07.180 | 99.00th=[ 416], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 537], 00:10:07.180 | 99.99th=[ 537] 00:10:07.180 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.180 slat (nsec): min=15730, max=97226, avg=19409.29, stdev=3513.08 00:10:07.180 clat (usec): min=97, max=2545, avg=200.77, stdev=58.14 00:10:07.180 lat (usec): min=119, max=2568, avg=220.18, stdev=58.63 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 112], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 188], 00:10:07.180 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:10:07.180 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 227], 00:10:07.180 | 99.00th=[ 293], 99.50th=[ 338], 99.90th=[ 433], 99.95th=[ 
469], 00:10:07.180 | 99.99th=[ 2540] 00:10:07.180 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.180 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.180 lat (usec) : 100=0.03%, 250=58.61%, 500=41.18%, 750=0.15% 00:10:07.180 lat (msec) : 4=0.03% 00:10:07.180 cpu : usr=1.40%, sys=5.10%, ctx=3971, majf=0, minf=11 00:10:07.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 issued rwts: total=1922,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.180 job2: (groupid=0, jobs=1): err= 0: pid=63451: Sun Oct 13 11:13:48 2024 00:10:07.180 read: IOPS=1935, BW=7740KiB/s (7926kB/s)(7748KiB/1001msec) 00:10:07.180 slat (nsec): min=11613, max=30610, avg=13425.09, stdev=1913.53 00:10:07.180 clat (usec): min=178, max=520, avg=267.72, stdev=30.29 00:10:07.180 lat (usec): min=195, max=537, avg=281.14, stdev=30.88 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:10:07.180 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:07.180 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 302], 00:10:07.180 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 506], 99.95th=[ 523], 00:10:07.180 | 99.99th=[ 523] 00:10:07.180 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.180 slat (nsec): min=17041, max=75382, avg=19644.83, stdev=3198.94 00:10:07.180 clat (usec): min=102, max=571, avg=199.66, stdev=23.03 00:10:07.180 lat (usec): min=121, max=607, avg=219.31, stdev=23.75 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 124], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 188], 00:10:07.180 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:10:07.180 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 229], 00:10:07.180 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 383], 99.95th=[ 416], 00:10:07.180 | 99.99th=[ 570] 00:10:07.180 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.180 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.180 lat (usec) : 250=59.02%, 500=40.90%, 750=0.08% 00:10:07.180 cpu : usr=1.20%, sys=5.50%, ctx=3985, majf=0, minf=9 00:10:07.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 issued rwts: total=1937,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.180 job3: (groupid=0, jobs=1): err= 0: pid=63452: Sun Oct 13 11:13:48 2024 00:10:07.180 read: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec) 00:10:07.180 slat (usec): min=11, max=117, avg=13.41, stdev= 3.11 00:10:07.180 clat (usec): min=110, max=525, avg=182.72, stdev=25.45 00:10:07.180 lat (usec): min=155, max=546, avg=196.13, stdev=26.01 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:07.180 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:07.180 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 221], 95.00th=[ 235], 
00:10:07.180 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 330], 00:10:07.180 | 99.99th=[ 529] 00:10:07.180 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:07.180 slat (nsec): min=13919, max=90916, avg=19447.67, stdev=3575.98 00:10:07.180 clat (usec): min=94, max=2031, avg=137.23, stdev=46.15 00:10:07.180 lat (usec): min=116, max=2049, avg=156.68, stdev=46.35 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 106], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 125], 00:10:07.180 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:10:07.180 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:10:07.180 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 326], 99.95th=[ 1647], 00:10:07.180 | 99.99th=[ 2040] 00:10:07.180 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:07.180 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:07.180 lat (usec) : 100=0.04%, 250=99.01%, 500=0.90%, 750=0.02% 00:10:07.180 lat (msec) : 2=0.02%, 4=0.02% 00:10:07.180 cpu : usr=1.60%, sys=7.70%, ctx=5677, majf=0, minf=12 00:10:07.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 issued rwts: total=2599,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.180 00:10:07.180 Run status group 0 (all jobs): 00:10:07.180 READ: bw=36.6MiB/s (38.4MB/s), 7680KiB/s-11.4MiB/s (7865kB/s-12.0MB/s), io=36.6MiB (38.4MB), run=1001-1001msec 00:10:07.180 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:07.180 00:10:07.180 Disk stats (read/write): 00:10:07.180 nvme0n1: ios=2610/2582, merge=0/0, ticks=472/352, in_queue=824, util=87.47% 00:10:07.180 nvme0n2: ios=1583/1907, merge=0/0, ticks=456/397, in_queue=853, util=89.08% 00:10:07.180 nvme0n3: ios=1536/1948, merge=0/0, ticks=414/399, in_queue=813, util=89.26% 00:10:07.180 nvme0n4: ios=2295/2560, merge=0/0, ticks=430/371, in_queue=801, util=89.81% 00:10:07.180 11:13:48 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:07.180 [global] 00:10:07.180 thread=1 00:10:07.180 invalidate=1 00:10:07.180 rw=write 00:10:07.180 time_based=1 00:10:07.180 runtime=1 00:10:07.180 ioengine=libaio 00:10:07.180 direct=1 00:10:07.180 bs=4096 00:10:07.180 iodepth=128 00:10:07.180 norandommap=0 00:10:07.180 numjobs=1 00:10:07.180 00:10:07.180 verify_dump=1 00:10:07.180 verify_backlog=512 00:10:07.180 verify_state_save=0 00:10:07.180 do_verify=1 00:10:07.180 verify=crc32c-intel 00:10:07.180 [job0] 00:10:07.180 filename=/dev/nvme0n1 00:10:07.180 [job1] 00:10:07.180 filename=/dev/nvme0n2 00:10:07.180 [job2] 00:10:07.180 filename=/dev/nvme0n3 00:10:07.180 [job3] 00:10:07.180 filename=/dev/nvme0n4 00:10:07.180 Could not set queue depth (nvme0n1) 00:10:07.180 Could not set queue depth (nvme0n2) 00:10:07.180 Could not set queue depth (nvme0n3) 00:10:07.180 Could not set queue depth (nvme0n4) 00:10:07.180 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.180 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.180 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.180 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.180 fio-3.35 00:10:07.180 Starting 4 threads 00:10:08.554 00:10:08.554 job0: (groupid=0, jobs=1): err= 0: pid=63512: Sun Oct 13 11:13:49 2024 00:10:08.554 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:10:08.554 slat (usec): min=10, max=15359, avg=263.67, stdev=1672.07 00:10:08.554 clat (usec): min=15520, max=61899, avg=34189.18, stdev=12022.57 00:10:08.554 lat (usec): min=19944, max=61914, avg=34452.85, stdev=12021.21 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[21103], 5.00th=[22676], 10.00th=[22938], 20.00th=[23462], 00:10:08.555 | 30.00th=[23987], 40.00th=[25822], 50.00th=[32375], 60.00th=[34341], 00:10:08.555 | 70.00th=[36439], 80.00th=[45351], 90.00th=[56361], 95.00th=[60556], 00:10:08.555 | 99.00th=[61604], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:10:08.555 | 99.99th=[62129] 00:10:08.555 write: IOPS=2354, BW=9419KiB/s (9646kB/s)(9476KiB/1006msec); 0 zone resets 00:10:08.555 slat (usec): min=9, max=13882, avg=187.65, stdev=1066.25 00:10:08.555 clat (usec): min=1744, max=49808, avg=23734.54, stdev=8428.07 00:10:08.555 lat (usec): min=8403, max=49845, avg=23922.19, stdev=8397.73 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[13698], 20.00th=[17171], 00:10:08.555 | 30.00th=[18220], 40.00th=[20579], 50.00th=[22152], 60.00th=[23200], 00:10:08.555 | 70.00th=[26870], 80.00th=[34866], 90.00th=[35914], 95.00th=[36439], 00:10:08.555 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:10:08.555 | 99.99th=[50070] 00:10:08.555 bw ( KiB/s): min= 8192, max= 9736, per=13.35%, avg=8964.00, stdev=1091.77, samples=2 00:10:08.555 iops : min= 2048, max= 2434, avg=2241.00, stdev=272.94, samples=2 00:10:08.555 lat (msec) : 2=0.02%, 10=0.86%, 20=17.46%, 50=75.35%, 100=6.32% 00:10:08.555 cpu : usr=2.29%, sys=6.37%, ctx=139, majf=0, minf=17 00:10:08.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.555 issued rwts: total=2048,2369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.555 job1: (groupid=0, jobs=1): err= 0: pid=63513: Sun Oct 13 11:13:49 2024 00:10:08.555 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:08.555 slat (usec): min=5, max=4941, avg=82.94, stdev=418.10 00:10:08.555 clat (usec): min=6569, max=16925, avg=10893.12, stdev=1106.18 00:10:08.555 lat (usec): min=6599, max=16957, avg=10976.06, stdev=1134.46 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 00:10:08.555 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:08.555 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12780], 00:10:08.555 | 99.00th=[14615], 99.50th=[15270], 99.90th=[16450], 99.95th=[16450], 00:10:08.555 | 99.99th=[16909] 00:10:08.555 write: IOPS=6033, BW=23.6MiB/s (24.7MB/s)(23.6MiB/1001msec); 0 zone resets 00:10:08.555 slat (usec): min=8, max=4707, avg=81.43, stdev=438.55 00:10:08.555 clat (usec): min=201, max=17412, avg=10814.25, stdev=1289.81 00:10:08.555 lat (usec): min=3918, max=17429, avg=10895.68, stdev=1350.05 
00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[ 5342], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:10:08.555 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:08.555 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 00:10:08.555 | 99.00th=[15008], 99.50th=[15795], 99.90th=[16909], 99.95th=[16909], 00:10:08.555 | 99.99th=[17433] 00:10:08.555 bw ( KiB/s): min=23088, max=24256, per=35.27%, avg=23672.00, stdev=825.90, samples=2 00:10:08.555 iops : min= 5772, max= 6064, avg=5918.00, stdev=206.48, samples=2 00:10:08.555 lat (usec) : 250=0.01% 00:10:08.555 lat (msec) : 4=0.03%, 10=12.07%, 20=87.89% 00:10:08.555 cpu : usr=5.10%, sys=13.90%, ctx=440, majf=0, minf=12 00:10:08.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.555 issued rwts: total=5632,6040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.555 job2: (groupid=0, jobs=1): err= 0: pid=63514: Sun Oct 13 11:13:49 2024 00:10:08.555 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:10:08.555 slat (usec): min=7, max=19000, avg=174.90, stdev=1205.26 00:10:08.555 clat (usec): min=12916, max=42591, avg=23889.36, stdev=4400.46 00:10:08.555 lat (usec): min=12930, max=46563, avg=24064.26, stdev=4462.06 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[13698], 5.00th=[19268], 10.00th=[19792], 20.00th=[20579], 00:10:08.555 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21890], 60.00th=[23725], 00:10:08.555 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30278], 00:10:08.555 | 99.00th=[36963], 99.50th=[36963], 99.90th=[39060], 99.95th=[41157], 00:10:08.555 | 99.99th=[42730] 00:10:08.555 write: IOPS=3333, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1006msec); 0 zone resets 00:10:08.555 slat (usec): min=5, max=14433, avg=131.49, stdev=878.10 00:10:08.555 clat (usec): min=1864, max=28527, avg=16120.92, stdev=2628.60 00:10:08.555 lat (usec): min=7547, max=28777, avg=16252.41, stdev=2561.10 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[ 8291], 5.00th=[11731], 10.00th=[13829], 20.00th=[14484], 00:10:08.555 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15795], 60.00th=[16712], 00:10:08.555 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18220], 95.00th=[20841], 00:10:08.555 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24773], 99.95th=[28181], 00:10:08.555 | 99.99th=[28443] 00:10:08.555 bw ( KiB/s): min=12881, max=12944, per=19.24%, avg=12912.50, stdev=44.55, samples=2 00:10:08.555 iops : min= 3220, max= 3236, avg=3228.00, stdev=11.31, samples=2 00:10:08.555 lat (msec) : 2=0.02%, 10=1.28%, 20=53.85%, 50=44.86% 00:10:08.555 cpu : usr=2.79%, sys=8.56%, ctx=168, majf=0, minf=13 00:10:08.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.555 issued rwts: total=3072,3353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.555 job3: (groupid=0, jobs=1): err= 0: pid=63515: Sun Oct 13 11:13:49 2024 00:10:08.555 read: IOPS=4926, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:10:08.555 slat (usec): min=4, max=3662, avg=96.47, 
stdev=436.86 00:10:08.555 clat (usec): min=772, max=16564, avg=12404.61, stdev=1621.49 00:10:08.555 lat (usec): min=2002, max=18678, avg=12501.07, stdev=1630.04 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[ 5735], 5.00th=[10290], 10.00th=[10814], 20.00th=[11207], 00:10:08.555 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12911], 00:10:08.555 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14222], 95.00th=[14615], 00:10:08.555 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16319], 99.95th=[16319], 00:10:08.555 | 99.99th=[16581] 00:10:08.555 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:08.555 slat (usec): min=9, max=4942, avg=94.60, stdev=425.97 00:10:08.555 clat (usec): min=9616, max=18940, avg=12735.37, stdev=1013.89 00:10:08.555 lat (usec): min=9638, max=18989, avg=12829.97, stdev=1086.49 00:10:08.555 clat percentiles (usec): 00:10:08.555 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11863], 20.00th=[12125], 00:10:08.555 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:08.555 | 70.00th=[12911], 80.00th=[13435], 90.00th=[13960], 95.00th=[14877], 00:10:08.555 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16712], 99.95th=[16712], 00:10:08.555 | 99.99th=[19006] 00:10:08.555 bw ( KiB/s): min=20480, max=20521, per=30.54%, avg=20500.50, stdev=28.99, samples=2 00:10:08.555 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:08.555 lat (usec) : 1000=0.01% 00:10:08.555 lat (msec) : 4=0.14%, 10=1.33%, 20=98.52% 00:10:08.555 cpu : usr=4.40%, sys=13.99%, ctx=487, majf=0, minf=11 00:10:08.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.555 issued rwts: total=4936,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.555 00:10:08.555 Run status group 0 (all jobs): 00:10:08.555 READ: bw=60.9MiB/s (63.9MB/s), 8143KiB/s-22.0MiB/s (8339kB/s-23.0MB/s), io=61.3MiB (64.3MB), run=1001-1006msec 00:10:08.555 WRITE: bw=65.6MiB/s (68.7MB/s), 9419KiB/s-23.6MiB/s (9646kB/s-24.7MB/s), io=65.9MiB (69.1MB), run=1001-1006msec 00:10:08.555 00:10:08.555 Disk stats (read/write): 00:10:08.555 nvme0n1: ios=1777/2048, merge=0/0, ticks=14705/10829, in_queue=25534, util=88.35% 00:10:08.555 nvme0n2: ios=4944/5120, merge=0/0, ticks=25468/23949, in_queue=49417, util=88.96% 00:10:08.555 nvme0n3: ios=2560/2956, merge=0/0, ticks=59298/44189, in_queue=103487, util=88.99% 00:10:08.555 nvme0n4: ios=4096/4539, merge=0/0, ticks=16134/16491, in_queue=32625, util=89.87% 00:10:08.555 11:13:49 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:08.555 [global] 00:10:08.555 thread=1 00:10:08.555 invalidate=1 00:10:08.555 rw=randwrite 00:10:08.555 time_based=1 00:10:08.555 runtime=1 00:10:08.555 ioengine=libaio 00:10:08.555 direct=1 00:10:08.555 bs=4096 00:10:08.555 iodepth=128 00:10:08.555 norandommap=0 00:10:08.555 numjobs=1 00:10:08.555 00:10:08.555 verify_dump=1 00:10:08.555 verify_backlog=512 00:10:08.555 verify_state_save=0 00:10:08.555 do_verify=1 00:10:08.555 verify=crc32c-intel 00:10:08.555 [job0] 00:10:08.555 filename=/dev/nvme0n1 00:10:08.555 [job1] 00:10:08.555 filename=/dev/nvme0n2 00:10:08.555 [job2] 00:10:08.555 filename=/dev/nvme0n3 00:10:08.555 [job3] 00:10:08.555 
filename=/dev/nvme0n4 00:10:08.555 Could not set queue depth (nvme0n1) 00:10:08.555 Could not set queue depth (nvme0n2) 00:10:08.555 Could not set queue depth (nvme0n3) 00:10:08.555 Could not set queue depth (nvme0n4) 00:10:08.555 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.555 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.555 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.556 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.556 fio-3.35 00:10:08.556 Starting 4 threads 00:10:09.932 00:10:09.932 job0: (groupid=0, jobs=1): err= 0: pid=63568: Sun Oct 13 11:13:51 2024 00:10:09.932 read: IOPS=6873, BW=26.8MiB/s (28.2MB/s)(27.0MiB/1004msec) 00:10:09.932 slat (usec): min=7, max=5710, avg=66.46, stdev=359.56 00:10:09.932 clat (usec): min=1087, max=15678, avg=9296.58, stdev=978.42 00:10:09.932 lat (usec): min=3708, max=15713, avg=9363.05, stdev=1000.38 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[ 6456], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 8979], 00:10:09.932 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:09.932 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10683], 00:10:09.932 | 99.00th=[11994], 99.50th=[13566], 99.90th=[14222], 99.95th=[14222], 00:10:09.932 | 99.99th=[15664] 00:10:09.932 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:10:09.932 slat (usec): min=8, max=6470, avg=69.15, stdev=398.47 00:10:09.932 clat (usec): min=3942, max=14626, avg=8801.07, stdev=1056.67 00:10:09.932 lat (usec): min=3975, max=14656, avg=8870.23, stdev=1003.90 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 8225], 00:10:09.932 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 8979], 00:10:09.932 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10159], 00:10:09.932 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12125], 99.95th=[12125], 00:10:09.932 | 99.99th=[14615] 00:10:09.932 bw ( KiB/s): min=28672, max=28672, per=50.58%, avg=28672.00, stdev= 0.00, samples=2 00:10:09.932 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:10:09.932 lat (msec) : 2=0.01%, 4=0.11%, 10=91.61%, 20=8.27% 00:10:09.932 cpu : usr=5.28%, sys=16.25%, ctx=353, majf=0, minf=11 00:10:09.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:09.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.932 issued rwts: total=6901,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.932 job1: (groupid=0, jobs=1): err= 0: pid=63569: Sun Oct 13 11:13:51 2024 00:10:09.932 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:09.932 slat (usec): min=4, max=11662, avg=145.51, stdev=736.04 00:10:09.932 clat (usec): min=9554, max=73564, avg=18853.58, stdev=11944.74 00:10:09.932 lat (usec): min=10196, max=77736, avg=18999.09, stdev=12037.37 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[10552], 5.00th=[11600], 10.00th=[12649], 20.00th=[12911], 00:10:09.932 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14615], 00:10:09.932 | 
70.00th=[17433], 80.00th=[19530], 90.00th=[37487], 95.00th=[47973], 00:10:09.932 | 99.00th=[70779], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:10:09.932 | 99.99th=[73925] 00:10:09.932 write: IOPS=3777, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1003msec); 0 zone resets 00:10:09.932 slat (usec): min=6, max=18093, avg=118.49, stdev=710.28 00:10:09.932 clat (usec): min=2671, max=57719, avg=15717.53, stdev=9120.05 00:10:09.932 lat (usec): min=2688, max=57750, avg=15836.02, stdev=9188.70 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[ 3425], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10159], 00:10:09.932 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[12649], 00:10:09.932 | 70.00th=[16319], 80.00th=[21365], 90.00th=[29492], 95.00th=[39060], 00:10:09.932 | 99.00th=[47449], 99.50th=[47973], 99.90th=[47973], 99.95th=[48497], 00:10:09.932 | 99.99th=[57934] 00:10:09.932 bw ( KiB/s): min=12312, max=17008, per=25.86%, avg=14660.00, stdev=3320.57, samples=2 00:10:09.932 iops : min= 3078, max= 4252, avg=3665.00, stdev=830.14, samples=2 00:10:09.932 lat (msec) : 4=0.57%, 10=6.85%, 20=70.88%, 50=19.45%, 100=2.25% 00:10:09.932 cpu : usr=3.19%, sys=10.18%, ctx=315, majf=0, minf=7 00:10:09.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:09.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.932 issued rwts: total=3584,3789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.932 job2: (groupid=0, jobs=1): err= 0: pid=63570: Sun Oct 13 11:13:51 2024 00:10:09.932 read: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec) 00:10:09.932 slat (usec): min=6, max=28650, avg=252.71, stdev=1644.71 00:10:09.932 clat (usec): min=13443, max=79179, avg=32545.56, stdev=14707.13 00:10:09.932 lat (usec): min=13457, max=79217, avg=32798.27, stdev=14840.08 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[15926], 5.00th=[19268], 10.00th=[20055], 20.00th=[20055], 00:10:09.932 | 30.00th=[20579], 40.00th=[22152], 50.00th=[27657], 60.00th=[34341], 00:10:09.932 | 70.00th=[39060], 80.00th=[40633], 90.00th=[57934], 95.00th=[62653], 00:10:09.932 | 99.00th=[74974], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:10:09.932 | 99.99th=[79168] 00:10:09.932 write: IOPS=1869, BW=7478KiB/s (7658kB/s)(7568KiB/1012msec); 0 zone resets 00:10:09.932 slat (usec): min=5, max=15524, avg=316.73, stdev=1398.36 00:10:09.932 clat (msec): min=6, max=108, avg=41.35, stdev=31.08 00:10:09.932 lat (msec): min=10, max=108, avg=41.67, stdev=31.29 00:10:09.932 clat percentiles (msec): 00:10:09.932 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:10:09.932 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 42], 00:10:09.932 | 70.00th=[ 57], 80.00th=[ 80], 90.00th=[ 94], 95.00th=[ 95], 00:10:09.932 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 109], 00:10:09.932 | 99.99th=[ 109] 00:10:09.932 bw ( KiB/s): min= 4936, max= 9194, per=12.46%, avg=7065.00, stdev=3010.86, samples=2 00:10:09.932 iops : min= 1234, max= 2298, avg=1766.00, stdev=752.36, samples=2 00:10:09.932 lat (msec) : 10=0.23%, 20=32.56%, 50=42.42%, 100=24.59%, 250=0.20% 00:10:09.932 cpu : usr=2.08%, sys=4.85%, ctx=301, majf=0, minf=5 00:10:09.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:10:09.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.932 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.932 issued rwts: total=1536,1892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.932 job3: (groupid=0, jobs=1): err= 0: pid=63572: Sun Oct 13 11:13:51 2024 00:10:09.932 read: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec) 00:10:09.932 slat (usec): min=6, max=27138, avg=367.86, stdev=1831.16 00:10:09.932 clat (usec): min=19028, max=83207, avg=45293.72, stdev=15982.55 00:10:09.932 lat (usec): min=19562, max=85592, avg=45661.58, stdev=16126.32 00:10:09.932 clat percentiles (usec): 00:10:09.932 | 1.00th=[20317], 5.00th=[21627], 10.00th=[27395], 20.00th=[32113], 00:10:09.932 | 30.00th=[35390], 40.00th=[38536], 50.00th=[40109], 60.00th=[43254], 00:10:09.932 | 70.00th=[55313], 80.00th=[61604], 90.00th=[71828], 95.00th=[74974], 00:10:09.932 | 99.00th=[81265], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:10:09.932 | 99.99th=[83362] 00:10:09.932 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(5972KiB/1009msec); 0 zone resets 00:10:09.932 slat (usec): min=5, max=29945, avg=409.91, stdev=1766.53 00:10:09.932 clat (msec): min=8, max=120, avg=52.49, stdev=27.69 00:10:09.932 lat (msec): min=9, max=120, avg=52.90, stdev=27.87 00:10:09.932 clat percentiles (msec): 00:10:09.932 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 24], 20.00th=[ 27], 00:10:09.932 | 30.00th=[ 30], 40.00th=[ 39], 50.00th=[ 42], 60.00th=[ 53], 00:10:09.932 | 70.00th=[ 66], 80.00th=[ 93], 90.00th=[ 95], 95.00th=[ 96], 00:10:09.932 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 110], 99.95th=[ 121], 00:10:09.932 | 99.99th=[ 121] 00:10:09.932 bw ( KiB/s): min= 5061, max= 5856, per=9.63%, avg=5458.50, stdev=562.15, samples=2 00:10:09.932 iops : min= 1265, max= 1464, avg=1364.50, stdev=140.71, samples=2 00:10:09.932 lat (msec) : 10=0.20%, 20=4.81%, 50=56.93%, 100=37.66%, 250=0.40% 00:10:09.932 cpu : usr=1.49%, sys=3.87%, ctx=359, majf=0, minf=19 00:10:09.933 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:10:09.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.933 issued rwts: total=1024,1493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.933 00:10:09.933 Run status group 0 (all jobs): 00:10:09.933 READ: bw=50.4MiB/s (52.8MB/s), 4059KiB/s-26.8MiB/s (4157kB/s-28.2MB/s), io=51.0MiB (53.4MB), run=1003-1012msec 00:10:09.933 WRITE: bw=55.4MiB/s (58.0MB/s), 5919KiB/s-27.9MiB/s (6061kB/s-29.2MB/s), io=56.0MiB (58.7MB), run=1003-1012msec 00:10:09.933 00:10:09.933 Disk stats (read/write): 00:10:09.933 nvme0n1: ios=5926/6144, merge=0/0, ticks=52084/49573, in_queue=101657, util=87.96% 00:10:09.933 nvme0n2: ios=3120/3175, merge=0/0, ticks=29640/21553, in_queue=51193, util=88.87% 00:10:09.933 nvme0n3: ios=1498/1536, merge=0/0, ticks=34041/36868, in_queue=70909, util=89.59% 00:10:09.933 nvme0n4: ios=1024/1055, merge=0/0, ticks=23769/28553, in_queue=52322, util=87.03% 00:10:09.933 11:13:51 -- target/fio.sh@55 -- # sync 00:10:09.933 11:13:51 -- target/fio.sh@59 -- # fio_pid=63591 00:10:09.933 11:13:51 -- target/fio.sh@61 -- # sleep 3 00:10:09.933 11:13:51 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:09.933 [global] 00:10:09.933 thread=1 00:10:09.933 invalidate=1 00:10:09.933 rw=read 00:10:09.933 time_based=1 00:10:09.933 
runtime=10 00:10:09.933 ioengine=libaio 00:10:09.933 direct=1 00:10:09.933 bs=4096 00:10:09.933 iodepth=1 00:10:09.933 norandommap=1 00:10:09.933 numjobs=1 00:10:09.933 00:10:09.933 [job0] 00:10:09.933 filename=/dev/nvme0n1 00:10:09.933 [job1] 00:10:09.933 filename=/dev/nvme0n2 00:10:09.933 [job2] 00:10:09.933 filename=/dev/nvme0n3 00:10:09.933 [job3] 00:10:09.933 filename=/dev/nvme0n4 00:10:09.933 Could not set queue depth (nvme0n1) 00:10:09.933 Could not set queue depth (nvme0n2) 00:10:09.933 Could not set queue depth (nvme0n3) 00:10:09.933 Could not set queue depth (nvme0n4) 00:10:09.933 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.933 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.933 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.933 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.933 fio-3.35 00:10:09.933 Starting 4 threads 00:10:13.215 11:13:54 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:13.215 fio: pid=63634, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.215 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47804416, buflen=4096 00:10:13.215 11:13:54 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:13.215 fio: pid=63633, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.215 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69754880, buflen=4096 00:10:13.215 11:13:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.215 11:13:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:13.473 fio: pid=63631, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.473 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10997760, buflen=4096 00:10:13.473 11:13:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.473 11:13:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:13.731 fio: pid=63632, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.731 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=64679936, buflen=4096 00:10:13.731 11:13:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.731 11:13:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:13.731 00:10:13.731 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63631: Sun Oct 13 11:13:55 2024 00:10:13.731 read: IOPS=5532, BW=21.6MiB/s (22.7MB/s)(74.5MiB/3447msec) 00:10:13.731 slat (usec): min=7, max=11721, avg=15.20, stdev=141.08 00:10:13.731 clat (usec): min=57, max=1731, avg=164.42, stdev=26.11 00:10:13.731 lat (usec): min=135, max=11891, avg=179.62, stdev=144.92 00:10:13.731 clat percentiles (usec): 00:10:13.731 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:13.732 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:13.732 | 70.00th=[ 
167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 200], 00:10:13.732 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 306], 99.95th=[ 494], 00:10:13.732 | 99.99th=[ 1139] 00:10:13.732 bw ( KiB/s): min=21516, max=23040, per=32.86%, avg=22546.00, stdev=612.28, samples=6 00:10:13.732 iops : min= 5379, max= 5760, avg=5636.50, stdev=153.07, samples=6 00:10:13.732 lat (usec) : 100=0.01%, 250=99.42%, 500=0.52%, 750=0.03%, 1000=0.01% 00:10:13.732 lat (msec) : 2=0.01% 00:10:13.732 cpu : usr=1.74%, sys=5.89%, ctx=19077, majf=0, minf=1 00:10:13.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 issued rwts: total=19070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.732 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63632: Sun Oct 13 11:13:55 2024 00:10:13.732 read: IOPS=4262, BW=16.6MiB/s (17.5MB/s)(61.7MiB/3705msec) 00:10:13.732 slat (usec): min=7, max=13553, avg=14.59, stdev=177.21 00:10:13.732 clat (usec): min=121, max=14810, avg=218.82, stdev=134.30 00:10:13.732 lat (usec): min=133, max=14844, avg=233.41, stdev=221.98 00:10:13.732 clat percentiles (usec): 00:10:13.732 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 159], 00:10:13.732 | 30.00th=[ 172], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 245], 00:10:13.732 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:10:13.732 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 457], 99.95th=[ 971], 00:10:13.732 | 99.99th=[ 3359] 00:10:13.732 bw ( KiB/s): min=14944, max=21853, per=24.42%, avg=16756.29, stdev=2675.69, samples=7 00:10:13.732 iops : min= 3736, max= 5463, avg=4189.00, stdev=668.81, samples=7 00:10:13.732 lat (usec) : 250=70.23%, 500=29.67%, 750=0.04%, 1000=0.01% 00:10:13.732 lat (msec) : 2=0.01%, 4=0.03%, 20=0.01% 00:10:13.732 cpu : usr=1.13%, sys=4.59%, ctx=15800, majf=0, minf=2 00:10:13.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 issued rwts: total=15792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.732 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63633: Sun Oct 13 11:13:55 2024 00:10:13.732 read: IOPS=5313, BW=20.8MiB/s (21.8MB/s)(66.5MiB/3205msec) 00:10:13.732 slat (usec): min=10, max=10752, avg=13.86, stdev=109.65 00:10:13.732 clat (usec): min=132, max=2456, avg=173.20, stdev=36.12 00:10:13.732 lat (usec): min=150, max=11037, avg=187.06, stdev=116.47 00:10:13.732 clat percentiles (usec): 00:10:13.732 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:13.732 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:13.732 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 215], 00:10:13.732 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 277], 99.95th=[ 363], 00:10:13.732 | 99.99th=[ 2376] 00:10:13.732 bw ( KiB/s): min=21320, max=21928, per=31.63%, avg=21708.67, stdev=223.72, samples=6 00:10:13.732 iops : min= 5330, max= 5482, avg=5427.17, stdev=55.93, samples=6 00:10:13.732 lat (usec) : 250=99.67%, 
500=0.29%, 750=0.01% 00:10:13.732 lat (msec) : 2=0.02%, 4=0.01% 00:10:13.732 cpu : usr=1.31%, sys=6.09%, ctx=17035, majf=0, minf=1 00:10:13.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 issued rwts: total=17031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.732 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63634: Sun Oct 13 11:13:55 2024 00:10:13.732 read: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(45.6MiB/2961msec) 00:10:13.732 slat (usec): min=7, max=107, avg=11.30, stdev= 3.37 00:10:13.732 clat (usec): min=138, max=7937, avg=241.18, stdev=86.41 00:10:13.732 lat (usec): min=152, max=7951, avg=252.48, stdev=86.00 00:10:13.732 clat percentiles (usec): 00:10:13.732 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 231], 00:10:13.732 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:10:13.732 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 273], 00:10:13.732 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 486], 99.95th=[ 1139], 00:10:13.732 | 99.99th=[ 2671] 00:10:13.732 bw ( KiB/s): min=15280, max=18099, per=23.16%, avg=15890.20, stdev=1235.22, samples=5 00:10:13.732 iops : min= 3820, max= 4524, avg=3972.40, stdev=308.47, samples=5 00:10:13.732 lat (usec) : 250=60.02%, 500=39.89%, 750=0.03% 00:10:13.732 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:13.732 cpu : usr=1.15%, sys=4.16%, ctx=11679, majf=0, minf=1 00:10:13.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.732 issued rwts: total=11672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.732 00:10:13.732 Run status group 0 (all jobs): 00:10:13.732 READ: bw=67.0MiB/s (70.3MB/s), 15.4MiB/s-21.6MiB/s (16.1MB/s-22.7MB/s), io=248MiB (260MB), run=2961-3705msec 00:10:13.732 00:10:13.732 Disk stats (read/write): 00:10:13.732 nvme0n1: ios=18597/0, merge=0/0, ticks=3116/0, in_queue=3116, util=95.39% 00:10:13.732 nvme0n2: ios=15284/0, merge=0/0, ticks=3315/0, in_queue=3315, util=95.61% 00:10:13.732 nvme0n3: ios=16681/0, merge=0/0, ticks=2886/0, in_queue=2886, util=96.24% 00:10:13.732 nvme0n4: ios=11327/0, merge=0/0, ticks=2636/0, in_queue=2636, util=96.52% 00:10:13.990 11:13:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.990 11:13:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:14.248 11:13:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.248 11:13:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:14.507 11:13:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.507 11:13:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:14.765 11:13:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.766 11:13:56 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:15.024 11:13:56 -- target/fio.sh@69 -- # fio_status=0 00:10:15.024 11:13:56 -- target/fio.sh@70 -- # wait 63591 00:10:15.024 11:13:56 -- target/fio.sh@70 -- # fio_status=4 00:10:15.024 11:13:56 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.024 11:13:56 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.024 11:13:56 -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.024 11:13:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:15.024 11:13:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.024 11:13:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:15.024 11:13:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.024 11:13:56 -- common/autotest_common.sh@1210 -- # return 0 00:10:15.024 nvmf hotplug test: fio failed as expected 00:10:15.024 11:13:56 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:15.024 11:13:56 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:15.024 11:13:56 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.283 11:13:56 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:15.283 11:13:56 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:15.283 11:13:56 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:15.283 11:13:56 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:15.283 11:13:56 -- target/fio.sh@91 -- # nvmftestfini 00:10:15.283 11:13:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:15.283 11:13:56 -- nvmf/common.sh@116 -- # sync 00:10:15.283 11:13:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:15.283 11:13:56 -- nvmf/common.sh@119 -- # set +e 00:10:15.283 11:13:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:15.283 11:13:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:15.283 rmmod nvme_tcp 00:10:15.283 rmmod nvme_fabrics 00:10:15.283 rmmod nvme_keyring 00:10:15.283 11:13:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:15.283 11:13:56 -- nvmf/common.sh@123 -- # set -e 00:10:15.283 11:13:56 -- nvmf/common.sh@124 -- # return 0 00:10:15.283 11:13:56 -- nvmf/common.sh@477 -- # '[' -n 63202 ']' 00:10:15.283 11:13:56 -- nvmf/common.sh@478 -- # killprocess 63202 00:10:15.283 11:13:56 -- common/autotest_common.sh@926 -- # '[' -z 63202 ']' 00:10:15.283 11:13:56 -- common/autotest_common.sh@930 -- # kill -0 63202 00:10:15.283 11:13:56 -- common/autotest_common.sh@931 -- # uname 00:10:15.283 11:13:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:15.283 11:13:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63202 00:10:15.542 killing process with pid 63202 00:10:15.542 11:13:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:15.542 11:13:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:15.542 11:13:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63202' 00:10:15.542 11:13:56 -- common/autotest_common.sh@945 -- # kill 63202 00:10:15.542 11:13:56 -- common/autotest_common.sh@950 -- # wait 63202 00:10:15.542 11:13:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:15.542 11:13:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:15.542 11:13:57 -- 
nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:15.542 11:13:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:15.542 11:13:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:15.542 11:13:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.542 11:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.542 11:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.542 11:13:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:15.542 ************************************ 00:10:15.542 END TEST nvmf_fio_target 00:10:15.542 ************************************ 00:10:15.542 00:10:15.542 real 0m19.225s 00:10:15.542 user 1m12.306s 00:10:15.542 sys 0m10.351s 00:10:15.542 11:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.542 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:10:15.801 11:13:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:15.801 11:13:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:15.801 11:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.801 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:10:15.801 ************************************ 00:10:15.801 START TEST nvmf_bdevio 00:10:15.801 ************************************ 00:10:15.801 11:13:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:15.801 * Looking for test storage... 00:10:15.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:15.801 11:13:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:15.801 11:13:57 -- nvmf/common.sh@7 -- # uname -s 00:10:15.801 11:13:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.801 11:13:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.801 11:13:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.801 11:13:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.801 11:13:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.801 11:13:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.801 11:13:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.801 11:13:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.801 11:13:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.801 11:13:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.801 11:13:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:15.801 11:13:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:15.801 11:13:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.801 11:13:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.801 11:13:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:15.801 11:13:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:15.801 11:13:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.801 11:13:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.801 11:13:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.801 11:13:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.801 11:13:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.801 11:13:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.801 11:13:57 -- paths/export.sh@5 -- # export PATH 00:10:15.801 11:13:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.801 11:13:57 -- nvmf/common.sh@46 -- # : 0 00:10:15.801 11:13:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:15.801 11:13:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:15.801 11:13:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:15.801 11:13:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.801 11:13:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.801 11:13:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:15.801 11:13:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:15.802 11:13:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:15.802 11:13:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.802 11:13:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.802 11:13:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:15.802 11:13:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:15.802 11:13:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.802 11:13:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:15.802 11:13:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:15.802 11:13:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:15.802 11:13:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:15.802 11:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.802 11:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.802 11:13:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:15.802 11:13:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:15.802 11:13:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:15.802 11:13:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:15.802 11:13:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:15.802 11:13:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:15.802 11:13:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.802 11:13:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.802 11:13:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:15.802 11:13:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:15.802 11:13:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:15.802 11:13:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:15.802 11:13:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:15.802 11:13:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.802 11:13:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:15.802 11:13:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:15.802 11:13:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:15.802 11:13:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:15.802 11:13:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:15.802 11:13:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:15.802 Cannot find device "nvmf_tgt_br" 00:10:15.802 11:13:57 -- nvmf/common.sh@154 -- # true 00:10:15.802 11:13:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.802 Cannot find device "nvmf_tgt_br2" 00:10:15.802 11:13:57 -- nvmf/common.sh@155 -- # true 00:10:15.802 11:13:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:15.802 11:13:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:15.802 Cannot find device "nvmf_tgt_br" 00:10:15.802 11:13:57 -- nvmf/common.sh@157 -- # true 00:10:15.802 11:13:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:15.802 Cannot find device "nvmf_tgt_br2" 00:10:15.802 11:13:57 -- nvmf/common.sh@158 -- # true 00:10:15.802 11:13:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:15.802 11:13:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:16.061 11:13:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.061 11:13:57 -- nvmf/common.sh@161 -- # true 00:10:16.061 11:13:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.061 11:13:57 -- nvmf/common.sh@162 -- # true 00:10:16.061 11:13:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.061 11:13:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.061 11:13:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.061 11:13:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.061 
11:13:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.061 11:13:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.061 11:13:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.061 11:13:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.061 11:13:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.061 11:13:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:16.061 11:13:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:16.061 11:13:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:16.061 11:13:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:16.061 11:13:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.061 11:13:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.061 11:13:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.061 11:13:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:16.061 11:13:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:16.061 11:13:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.061 11:13:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.061 11:13:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.061 11:13:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.061 11:13:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.061 11:13:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:16.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:16.061 00:10:16.061 --- 10.0.0.2 ping statistics --- 00:10:16.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.061 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:16.061 11:13:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:16.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:10:16.061 00:10:16.061 --- 10.0.0.3 ping statistics --- 00:10:16.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.061 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:16.061 11:13:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:16.061 00:10:16.061 --- 10.0.0.1 ping statistics --- 00:10:16.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.061 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:16.061 11:13:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.061 11:13:57 -- nvmf/common.sh@421 -- # return 0 00:10:16.061 11:13:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:16.061 11:13:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.061 11:13:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:16.061 11:13:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:16.061 11:13:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.061 11:13:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:16.061 11:13:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:16.061 11:13:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.061 11:13:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:16.061 11:13:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:16.061 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 11:13:57 -- nvmf/common.sh@469 -- # nvmfpid=63896 00:10:16.061 11:13:57 -- nvmf/common.sh@470 -- # waitforlisten 63896 00:10:16.061 11:13:57 -- common/autotest_common.sh@819 -- # '[' -z 63896 ']' 00:10:16.061 11:13:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.061 11:13:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.061 11:13:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.061 11:13:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.061 11:13:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.061 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:10:16.320 [2024-10-13 11:13:57.685122] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:16.320 [2024-10-13 11:13:57.685211] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.320 [2024-10-13 11:13:57.824883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.320 [2024-10-13 11:13:57.879097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.320 [2024-10-13 11:13:57.879752] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.320 [2024-10-13 11:13:57.879960] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.320 [2024-10-13 11:13:57.880507] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
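The pings above confirm initiator-to-target reachability before the target application is started. Condensed, the veth/bridge topology that the trace builds is roughly the following sketch; interface names and addresses are taken from the trace, while ordering and error handling are simplified.

    # initiator interface stays on the host, target interfaces move into a namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing as in the trace: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and open TCP/4420 from the initiator
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place the initiator at 10.0.0.1 can reach the NVMe/TCP listener that is later created on 10.0.0.2:4420.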
00:10:16.320 [2024-10-13 11:13:57.880987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.320 [2024-10-13 11:13:57.881118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.320 [2024-10-13 11:13:57.881267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.320 [2024-10-13 11:13:57.881267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:17.261 11:13:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.261 11:13:58 -- common/autotest_common.sh@852 -- # return 0 00:10:17.261 11:13:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:17.261 11:13:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 11:13:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.261 11:13:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.261 11:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 [2024-10-13 11:13:58.757437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.261 11:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.261 11:13:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.261 11:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 Malloc0 00:10:17.261 11:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.261 11:13:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.261 11:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 11:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.261 11:13:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.261 11:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 11:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.261 11:13:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.261 11:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.261 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 [2024-10-13 11:13:58.813629] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.261 11:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.261 11:13:58 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:17.261 11:13:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:17.261 11:13:58 -- nvmf/common.sh@520 -- # config=() 00:10:17.261 11:13:58 -- nvmf/common.sh@520 -- # local subsystem config 00:10:17.261 11:13:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:17.261 11:13:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:17.261 { 00:10:17.261 "params": { 00:10:17.261 "name": "Nvme$subsystem", 00:10:17.261 "trtype": "$TEST_TRANSPORT", 00:10:17.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.261 "adrfam": "ipv4", 00:10:17.261 "trsvcid": "$NVMF_PORT", 00:10:17.261 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.261 "hdgst": ${hdgst:-false}, 00:10:17.261 "ddgst": ${ddgst:-false} 00:10:17.261 }, 00:10:17.261 "method": "bdev_nvme_attach_controller" 00:10:17.261 } 00:10:17.261 EOF 00:10:17.261 )") 00:10:17.261 11:13:58 -- nvmf/common.sh@542 -- # cat 00:10:17.261 11:13:58 -- nvmf/common.sh@544 -- # jq . 00:10:17.261 11:13:58 -- nvmf/common.sh@545 -- # IFS=, 00:10:17.261 11:13:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:17.261 "params": { 00:10:17.261 "name": "Nvme1", 00:10:17.261 "trtype": "tcp", 00:10:17.261 "traddr": "10.0.0.2", 00:10:17.261 "adrfam": "ipv4", 00:10:17.261 "trsvcid": "4420", 00:10:17.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.261 "hdgst": false, 00:10:17.261 "ddgst": false 00:10:17.261 }, 00:10:17.261 "method": "bdev_nvme_attach_controller" 00:10:17.261 }' 00:10:17.520 [2024-10-13 11:13:58.872232] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:17.520 [2024-10-13 11:13:58.872355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63932 ] 00:10:17.520 [2024-10-13 11:13:59.011910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.520 [2024-10-13 11:13:59.082850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.520 [2024-10-13 11:13:59.082953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.520 [2024-10-13 11:13:59.082944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.779 [2024-10-13 11:13:59.220142] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:17.779 [2024-10-13 11:13:59.220558] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:17.779 I/O targets: 00:10:17.779 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:17.779 00:10:17.779 00:10:17.779 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.779 http://cunit.sourceforge.net/ 00:10:17.779 00:10:17.779 00:10:17.779 Suite: bdevio tests on: Nvme1n1 00:10:17.779 Test: blockdev write read block ...passed 00:10:17.779 Test: blockdev write zeroes read block ...passed 00:10:17.779 Test: blockdev write zeroes read no split ...passed 00:10:17.779 Test: blockdev write zeroes read split ...passed 00:10:17.779 Test: blockdev write zeroes read split partial ...passed 00:10:17.779 Test: blockdev reset ...[2024-10-13 11:13:59.253628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:17.779 [2024-10-13 11:13:59.253997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a83c80 (9): Bad file descriptor 00:10:17.779 [2024-10-13 11:13:59.269919] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:17.779 passed 00:10:17.779 Test: blockdev write read 8 blocks ...passed 00:10:17.779 Test: blockdev write read size > 128k ...passed 00:10:17.779 Test: blockdev write read invalid size ...passed 00:10:17.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.779 Test: blockdev write read max offset ...passed 00:10:17.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.779 Test: blockdev writev readv 8 blocks ...passed 00:10:17.779 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.779 Test: blockdev writev readv block ...passed 00:10:17.779 Test: blockdev writev readv size > 128k ...passed 00:10:17.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.779 Test: blockdev comparev and writev ...[2024-10-13 11:13:59.278475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.278540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.278577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.278590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.278928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.278950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.278970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.278982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.279282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.279308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.279342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.279367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.279752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.279790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.279812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.779 [2024-10-13 11:13:59.279824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.779 passed 00:10:17.779 Test: blockdev nvme passthru rw ...passed 00:10:17.779 Test: blockdev nvme passthru vendor specific ...[2024-10-13 11:13:59.280846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.779 [2024-10-13 11:13:59.280882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.281015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.779 [2024-10-13 11:13:59.281035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.281165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.779 [2024-10-13 11:13:59.281191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.779 [2024-10-13 11:13:59.281309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.779 [2024-10-13 11:13:59.281363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.779 passed 00:10:17.779 Test: blockdev nvme admin passthru ...passed 00:10:17.779 Test: blockdev copy ...passed 00:10:17.779 00:10:17.779 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.779 suites 1 1 n/a 0 0 00:10:17.779 tests 23 23 23 0 0 00:10:17.779 asserts 152 152 152 0 n/a 00:10:17.779 00:10:17.779 Elapsed time = 0.148 seconds 00:10:18.038 11:13:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.038 11:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.038 11:13:59 -- common/autotest_common.sh@10 -- # set +x 00:10:18.038 11:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.038 11:13:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:18.038 11:13:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:18.038 11:13:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:18.038 11:13:59 -- nvmf/common.sh@116 -- # sync 00:10:18.038 11:13:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:18.038 11:13:59 -- nvmf/common.sh@119 -- # set +e 00:10:18.038 11:13:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:18.038 11:13:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:18.038 rmmod nvme_tcp 00:10:18.038 rmmod nvme_fabrics 00:10:18.038 rmmod nvme_keyring 00:10:18.038 11:13:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:18.038 11:13:59 -- nvmf/common.sh@123 -- # set -e 00:10:18.038 11:13:59 -- nvmf/common.sh@124 -- # return 0 00:10:18.038 11:13:59 -- nvmf/common.sh@477 -- # '[' -n 63896 ']' 00:10:18.038 11:13:59 -- nvmf/common.sh@478 -- # killprocess 63896 00:10:18.038 11:13:59 -- common/autotest_common.sh@926 -- # '[' -z 63896 ']' 00:10:18.038 11:13:59 -- common/autotest_common.sh@930 -- # kill -0 63896 00:10:18.038 11:13:59 -- common/autotest_common.sh@931 -- # uname 00:10:18.038 11:13:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.038 11:13:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63896 00:10:18.038 11:13:59 -- common/autotest_common.sh@932 -- 
# process_name=reactor_3 00:10:18.038 11:13:59 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:10:18.038 killing process with pid 63896 00:10:18.038 11:13:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63896' 00:10:18.038 11:13:59 -- common/autotest_common.sh@945 -- # kill 63896 00:10:18.038 11:13:59 -- common/autotest_common.sh@950 -- # wait 63896 00:10:18.297 11:13:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:18.297 11:13:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:18.297 11:13:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:18.297 11:13:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.297 11:13:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:18.297 11:13:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.297 11:13:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.297 11:13:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.297 11:13:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:18.297 00:10:18.297 real 0m2.661s 00:10:18.297 user 0m8.800s 00:10:18.297 sys 0m0.644s 00:10:18.297 ************************************ 00:10:18.297 END TEST nvmf_bdevio 00:10:18.297 ************************************ 00:10:18.297 11:13:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.297 11:13:59 -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 11:13:59 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:18.297 11:13:59 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:18.297 11:13:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:18.297 11:13:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.297 11:13:59 -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 ************************************ 00:10:18.297 START TEST nvmf_bdevio_no_huge 00:10:18.297 ************************************ 00:10:18.297 11:13:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:18.556 * Looking for test storage... 
00:10:18.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.556 11:13:59 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.556 11:13:59 -- nvmf/common.sh@7 -- # uname -s 00:10:18.556 11:13:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.556 11:13:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.556 11:13:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.556 11:13:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.556 11:13:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.556 11:13:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.556 11:13:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.556 11:13:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.556 11:13:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.556 11:13:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.556 11:13:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:18.556 11:13:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:18.556 11:13:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.556 11:13:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.556 11:13:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.556 11:13:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.556 11:13:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.556 11:13:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.556 11:13:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.556 11:13:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.556 11:13:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.556 11:13:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.556 11:13:59 -- 
paths/export.sh@5 -- # export PATH 00:10:18.557 11:13:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.557 11:13:59 -- nvmf/common.sh@46 -- # : 0 00:10:18.557 11:13:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.557 11:13:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.557 11:13:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.557 11:13:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.557 11:13:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.557 11:13:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.557 11:13:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.557 11:13:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.557 11:13:59 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.557 11:13:59 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.557 11:13:59 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:18.557 11:13:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.557 11:13:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.557 11:13:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.557 11:13:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.557 11:13:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.557 11:13:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.557 11:13:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.557 11:13:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.557 11:13:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.557 11:13:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.557 11:13:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.557 11:13:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.557 11:13:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.557 11:13:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.557 11:13:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.557 11:13:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.557 11:13:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.557 11:13:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.557 11:13:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.557 11:13:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.557 11:13:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.557 11:13:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.557 11:13:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.557 11:13:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.557 11:13:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.557 11:13:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.557 11:13:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.557 
11:14:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.557 Cannot find device "nvmf_tgt_br" 00:10:18.557 11:14:00 -- nvmf/common.sh@154 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.557 Cannot find device "nvmf_tgt_br2" 00:10:18.557 11:14:00 -- nvmf/common.sh@155 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.557 11:14:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.557 Cannot find device "nvmf_tgt_br" 00:10:18.557 11:14:00 -- nvmf/common.sh@157 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.557 Cannot find device "nvmf_tgt_br2" 00:10:18.557 11:14:00 -- nvmf/common.sh@158 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.557 11:14:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:18.557 11:14:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.557 11:14:00 -- nvmf/common.sh@161 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.557 11:14:00 -- nvmf/common.sh@162 -- # true 00:10:18.557 11:14:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.816 11:14:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.816 11:14:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.816 11:14:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.816 11:14:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.816 11:14:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.816 11:14:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.816 11:14:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.816 11:14:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.816 11:14:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:18.816 11:14:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:18.816 11:14:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:18.816 11:14:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:18.816 11:14:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.816 11:14:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.816 11:14:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.816 11:14:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:18.816 11:14:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:18.816 11:14:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.816 11:14:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.816 11:14:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.816 11:14:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.816 11:14:00 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.816 11:14:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:18.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:10:18.816 00:10:18.816 --- 10.0.0.2 ping statistics --- 00:10:18.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.816 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:18.816 11:14:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:18.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:18.816 00:10:18.816 --- 10.0.0.3 ping statistics --- 00:10:18.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.816 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:18.816 11:14:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:18.816 00:10:18.816 --- 10.0.0.1 ping statistics --- 00:10:18.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.816 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:18.816 11:14:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.816 11:14:00 -- nvmf/common.sh@421 -- # return 0 00:10:18.816 11:14:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:18.816 11:14:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.816 11:14:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:18.816 11:14:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:18.816 11:14:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.816 11:14:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:18.816 11:14:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:18.816 11:14:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:18.816 11:14:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:18.816 11:14:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:18.816 11:14:00 -- common/autotest_common.sh@10 -- # set +x 00:10:18.816 11:14:00 -- nvmf/common.sh@469 -- # nvmfpid=64106 00:10:18.816 11:14:00 -- nvmf/common.sh@470 -- # waitforlisten 64106 00:10:18.816 11:14:00 -- common/autotest_common.sh@819 -- # '[' -z 64106 ']' 00:10:18.816 11:14:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:18.816 11:14:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.816 11:14:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.816 11:14:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.816 11:14:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.816 11:14:00 -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 [2024-10-13 11:14:00.453446] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
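As in the earlier bdevio run, the trace below starts nvmf_tgt inside the namespace and then provisions it over its RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 containing that namespace, and a TCP listener on 10.0.0.2:4420. Assuming rpc_cmd resolves to scripts/rpc.py (as the direct rpc.py calls elsewhere in this log suggest), the provisioning condenses to roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420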
00:10:19.075 [2024-10-13 11:14:00.453729] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:19.075 [2024-10-13 11:14:00.602658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.355 [2024-10-13 11:14:00.735908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.355 [2024-10-13 11:14:00.736598] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.355 [2024-10-13 11:14:00.736829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.355 [2024-10-13 11:14:00.737561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.355 [2024-10-13 11:14:00.737918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.355 [2024-10-13 11:14:00.738214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:19.355 [2024-10-13 11:14:00.738060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:19.355 [2024-10-13 11:14:00.738222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.939 11:14:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.939 11:14:01 -- common/autotest_common.sh@852 -- # return 0 00:10:19.939 11:14:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:19.939 11:14:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:19.939 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:19.939 11:14:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.939 11:14:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.939 11:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.939 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:19.939 [2024-10-13 11:14:01.521663] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.939 11:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.939 11:14:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:19.939 11:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.939 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:20.197 Malloc0 00:10:20.197 11:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.197 11:14:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.197 11:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.197 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:20.197 11:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.197 11:14:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.197 11:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.197 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:20.197 11:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.197 11:14:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.197 11:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.197 11:14:01 -- common/autotest_common.sh@10 -- # set +x 00:10:20.197 
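With nvmf_tgt running in the namespace, bdevio.sh provisions it entirely over the RPC socket; rpc_cmd in the trace effectively resolves to scripts/rpc.py against /var/tmp/spdk.sock. A condensed equivalent of the calls shown above (option values copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, options as used by the test
    $rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose the bdev as namespace 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420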
[2024-10-13 11:14:01.570024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.197 11:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.197 11:14:01 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:20.197 11:14:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.197 11:14:01 -- nvmf/common.sh@520 -- # config=() 00:10:20.197 11:14:01 -- nvmf/common.sh@520 -- # local subsystem config 00:10:20.197 11:14:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:20.197 11:14:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:20.197 { 00:10:20.197 "params": { 00:10:20.197 "name": "Nvme$subsystem", 00:10:20.197 "trtype": "$TEST_TRANSPORT", 00:10:20.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.197 "adrfam": "ipv4", 00:10:20.197 "trsvcid": "$NVMF_PORT", 00:10:20.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.198 "hdgst": ${hdgst:-false}, 00:10:20.198 "ddgst": ${ddgst:-false} 00:10:20.198 }, 00:10:20.198 "method": "bdev_nvme_attach_controller" 00:10:20.198 } 00:10:20.198 EOF 00:10:20.198 )") 00:10:20.198 11:14:01 -- nvmf/common.sh@542 -- # cat 00:10:20.198 11:14:01 -- nvmf/common.sh@544 -- # jq . 00:10:20.198 11:14:01 -- nvmf/common.sh@545 -- # IFS=, 00:10:20.198 11:14:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:20.198 "params": { 00:10:20.198 "name": "Nvme1", 00:10:20.198 "trtype": "tcp", 00:10:20.198 "traddr": "10.0.0.2", 00:10:20.198 "adrfam": "ipv4", 00:10:20.198 "trsvcid": "4420", 00:10:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.198 "hdgst": false, 00:10:20.198 "ddgst": false 00:10:20.198 }, 00:10:20.198 "method": "bdev_nvme_attach_controller" 00:10:20.198 }' 00:10:20.198 [2024-10-13 11:14:01.626272] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:20.198 [2024-10-13 11:14:01.626403] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64146 ] 00:10:20.198 [2024-10-13 11:14:01.773715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.457 [2024-10-13 11:14:01.907473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.457 [2024-10-13 11:14:01.907593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.457 [2024-10-13 11:14:01.907601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.715 [2024-10-13 11:14:02.076499] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:20.715 [2024-10-13 11:14:02.076561] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:20.715 I/O targets: 00:10:20.715 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:20.715 00:10:20.715 00:10:20.715 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.715 http://cunit.sourceforge.net/ 00:10:20.715 00:10:20.715 00:10:20.715 Suite: bdevio tests on: Nvme1n1 00:10:20.715 Test: blockdev write read block ...passed 00:10:20.716 Test: blockdev write zeroes read block ...passed 00:10:20.716 Test: blockdev write zeroes read no split ...passed 00:10:20.716 Test: blockdev write zeroes read split ...passed 00:10:20.716 Test: blockdev write zeroes read split partial ...passed 00:10:20.716 Test: blockdev reset ...[2024-10-13 11:14:02.117465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:20.716 [2024-10-13 11:14:02.117751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a75680 (9): Bad file descriptor 00:10:20.716 passed 00:10:20.716 Test: blockdev write read 8 blocks ...[2024-10-13 11:14:02.135678] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.716 passed 00:10:20.716 Test: blockdev write read size > 128k ...passed 00:10:20.716 Test: blockdev write read invalid size ...passed 00:10:20.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.716 Test: blockdev write read max offset ...passed 00:10:20.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.716 Test: blockdev writev readv 8 blocks ...passed 00:10:20.716 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.716 Test: blockdev writev readv block ...passed 00:10:20.716 Test: blockdev writev readv size > 128k ...passed 00:10:20.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.716 Test: blockdev comparev and writev ...[2024-10-13 11:14:02.144257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.144351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.144393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.144407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.144732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.144753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.144774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.144786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.145068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.145088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.145108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.145121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.145413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.145440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.145461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.716 [2024-10-13 11:14:02.145473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:20.716 passed 00:10:20.716 Test: blockdev nvme passthru rw ...passed 00:10:20.716 Test: blockdev nvme passthru vendor specific ...[2024-10-13 11:14:02.146287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.716 [2024-10-13 11:14:02.146337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.146467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.716 [2024-10-13 11:14:02.146486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.146595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:10:20.716 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:10:20.716 [2024-10-13 11:14:02.146777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:20.716 [2024-10-13 11:14:02.146929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.716 [2024-10-13 11:14:02.146950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:20.716 passed 00:10:20.716 Test: blockdev copy ...passed 00:10:20.716 00:10:20.716 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.716 suites 1 1 n/a 0 0 00:10:20.716 tests 23 23 23 0 0 00:10:20.716 asserts 152 152 152 0 n/a 00:10:20.716 00:10:20.716 Elapsed time = 0.165 seconds 00:10:20.975 11:14:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.975 11:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.975 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:10:20.975 11:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.975 11:14:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:20.975 11:14:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:20.975 11:14:02 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:20.975 11:14:02 -- nvmf/common.sh@116 -- # sync 00:10:20.975 11:14:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:20.975 11:14:02 -- nvmf/common.sh@119 -- # set +e 00:10:20.975 11:14:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:20.975 11:14:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:20.975 rmmod nvme_tcp 00:10:20.975 rmmod nvme_fabrics 00:10:21.234 rmmod nvme_keyring 00:10:21.234 11:14:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:21.234 11:14:02 -- nvmf/common.sh@123 -- # set -e 00:10:21.234 11:14:02 -- nvmf/common.sh@124 -- # return 0 00:10:21.234 11:14:02 -- nvmf/common.sh@477 -- # '[' -n 64106 ']' 00:10:21.234 11:14:02 -- nvmf/common.sh@478 -- # killprocess 64106 00:10:21.234 11:14:02 -- common/autotest_common.sh@926 -- # '[' -z 64106 ']' 00:10:21.234 11:14:02 -- common/autotest_common.sh@930 -- # kill -0 64106 00:10:21.234 11:14:02 -- common/autotest_common.sh@931 -- # uname 00:10:21.234 11:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.234 11:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64106 00:10:21.234 killing process with pid 64106 00:10:21.234 11:14:02 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:10:21.234 11:14:02 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:10:21.234 11:14:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64106' 00:10:21.234 11:14:02 -- common/autotest_common.sh@945 -- # kill 64106 00:10:21.234 11:14:02 -- common/autotest_common.sh@950 -- # wait 64106 00:10:21.492 11:14:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:21.492 11:14:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:21.492 11:14:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:21.492 11:14:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.492 11:14:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:21.492 11:14:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.492 11:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.492 11:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.492 11:14:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:21.492 00:10:21.493 real 0m3.148s 00:10:21.493 user 0m10.220s 00:10:21.493 sys 0m1.136s 00:10:21.493 11:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.493 ************************************ 00:10:21.493 END TEST nvmf_bdevio_no_huge 00:10:21.493 ************************************ 00:10:21.493 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:10:21.493 11:14:03 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:21.493 11:14:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:21.493 11:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.493 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:10:21.493 ************************************ 00:10:21.493 START TEST nvmf_tls 00:10:21.493 ************************************ 00:10:21.493 11:14:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:21.751 * Looking for test storage... 
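The teardown that closed nvmf_bdevio_no_huge a few lines above mirrors the setup: drop the subsystem, unload the kernel modules, stop the target, and tear down the namespace. A hedged sketch of that sequence, assuming remove_spdk_ns simply deletes the nvmf_tgt_ns_spdk namespace and with $nvmfpid standing in for the target pid (64106 in this run):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # wait only applies when nvmf_tgt was launched from the same shell
    ip netns delete nvmf_tgt_ns_spdk     # assumption: what remove_spdk_ns does for the veth setup
    ip -4 addr flush nvmf_init_if        # clear the initiator address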
00:10:21.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.751 11:14:03 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.751 11:14:03 -- nvmf/common.sh@7 -- # uname -s 00:10:21.751 11:14:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.751 11:14:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.751 11:14:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.751 11:14:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.751 11:14:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.751 11:14:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.751 11:14:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.751 11:14:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.751 11:14:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.751 11:14:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.751 11:14:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:21.752 11:14:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:10:21.752 11:14:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.752 11:14:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.752 11:14:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.752 11:14:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.752 11:14:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.752 11:14:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.752 11:14:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.752 11:14:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.752 11:14:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.752 11:14:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.752 11:14:03 -- paths/export.sh@5 
-- # export PATH 00:10:21.752 11:14:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.752 11:14:03 -- nvmf/common.sh@46 -- # : 0 00:10:21.752 11:14:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:21.752 11:14:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:21.752 11:14:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:21.752 11:14:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.752 11:14:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.752 11:14:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:21.752 11:14:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:21.752 11:14:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:21.752 11:14:03 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.752 11:14:03 -- target/tls.sh@71 -- # nvmftestinit 00:10:21.752 11:14:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:21.752 11:14:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.752 11:14:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:21.752 11:14:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:21.752 11:14:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:21.752 11:14:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.752 11:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.752 11:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.752 11:14:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:21.752 11:14:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:21.752 11:14:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:21.752 11:14:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:21.752 11:14:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:21.752 11:14:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:21.752 11:14:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.752 11:14:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.752 11:14:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.752 11:14:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:21.752 11:14:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.752 11:14:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.752 11:14:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.752 11:14:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.752 11:14:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.752 11:14:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.752 11:14:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.752 11:14:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.752 11:14:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:21.752 11:14:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:10:21.752 Cannot find device "nvmf_tgt_br" 00:10:21.752 11:14:03 -- nvmf/common.sh@154 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.752 Cannot find device "nvmf_tgt_br2" 00:10:21.752 11:14:03 -- nvmf/common.sh@155 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:21.752 11:14:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:21.752 Cannot find device "nvmf_tgt_br" 00:10:21.752 11:14:03 -- nvmf/common.sh@157 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:21.752 Cannot find device "nvmf_tgt_br2" 00:10:21.752 11:14:03 -- nvmf/common.sh@158 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:21.752 11:14:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:21.752 11:14:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.752 11:14:03 -- nvmf/common.sh@161 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.752 11:14:03 -- nvmf/common.sh@162 -- # true 00:10:21.752 11:14:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.752 11:14:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.752 11:14:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.752 11:14:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.011 11:14:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.011 11:14:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.011 11:14:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.011 11:14:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.011 11:14:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.011 11:14:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:22.011 11:14:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:22.011 11:14:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:22.011 11:14:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:22.011 11:14:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.011 11:14:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.011 11:14:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.011 11:14:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:22.011 11:14:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:22.011 11:14:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.011 11:14:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.011 11:14:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.011 11:14:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.011 11:14:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:10:22.011 11:14:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:22.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:22.011 00:10:22.011 --- 10.0.0.2 ping statistics --- 00:10:22.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.011 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:22.011 11:14:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:22.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:22.011 00:10:22.011 --- 10.0.0.3 ping statistics --- 00:10:22.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.011 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:22.011 11:14:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:22.011 00:10:22.011 --- 10.0.0.1 ping statistics --- 00:10:22.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.011 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:22.011 11:14:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.011 11:14:03 -- nvmf/common.sh@421 -- # return 0 00:10:22.011 11:14:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:22.011 11:14:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.011 11:14:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:22.011 11:14:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:22.011 11:14:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.011 11:14:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:22.011 11:14:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:22.011 11:14:03 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:22.011 11:14:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:22.011 11:14:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:22.011 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:10:22.011 11:14:03 -- nvmf/common.sh@469 -- # nvmfpid=64320 00:10:22.011 11:14:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:22.011 11:14:03 -- nvmf/common.sh@470 -- # waitforlisten 64320 00:10:22.011 11:14:03 -- common/autotest_common.sh@819 -- # '[' -z 64320 ']' 00:10:22.011 11:14:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.011 11:14:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:22.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.011 11:14:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.011 11:14:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:22.011 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:10:22.270 [2024-10-13 11:14:03.612978] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
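For the tls suite the target is started with --wait-for-rpc, so subsystem initialization is deferred until the socket implementation has been configured over RPC; the trace that follows drives exactly that. Boiled down to the essential calls (copied from the commands visible below):

    $rpc sock_set_default_impl -i ssl                         # make the ssl sock implementation the default
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version    # read back the current setting
    $rpc sock_impl_set_options -i ssl --tls-version 13        # the test also exercises version 7 and ktls on/off
    $rpc framework_start_init                                 # only now finish bringing the target up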
00:10:22.270 [2024-10-13 11:14:03.613083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.270 [2024-10-13 11:14:03.751513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.270 [2024-10-13 11:14:03.819851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.270 [2024-10-13 11:14:03.820012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.270 [2024-10-13 11:14:03.820027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.270 [2024-10-13 11:14:03.820038] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.270 [2024-10-13 11:14:03.820073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.270 11:14:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:22.270 11:14:03 -- common/autotest_common.sh@852 -- # return 0 00:10:22.270 11:14:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:22.270 11:14:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:22.270 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:10:22.528 11:14:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.528 11:14:03 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:22.528 11:14:03 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:22.528 true 00:10:22.528 11:14:04 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:22.528 11:14:04 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:22.787 11:14:04 -- target/tls.sh@82 -- # version=0 00:10:22.787 11:14:04 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:22.787 11:14:04 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:23.046 11:14:04 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:23.046 11:14:04 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:23.305 11:14:04 -- target/tls.sh@90 -- # version=13 00:10:23.305 11:14:04 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:23.305 11:14:04 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:23.565 11:14:05 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:23.565 11:14:05 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:23.823 11:14:05 -- target/tls.sh@98 -- # version=7 00:10:23.823 11:14:05 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:23.823 11:14:05 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:23.823 11:14:05 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:24.081 11:14:05 -- target/tls.sh@105 -- # ktls=false 00:10:24.081 11:14:05 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:24.081 11:14:05 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:24.340 11:14:05 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:24.340 11:14:05 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:10:24.598 11:14:06 -- target/tls.sh@113 -- # ktls=true 00:10:24.598 11:14:06 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:24.598 11:14:06 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:24.857 11:14:06 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:24.857 11:14:06 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:25.116 11:14:06 -- target/tls.sh@121 -- # ktls=false 00:10:25.116 11:14:06 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:25.116 11:14:06 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:25.116 11:14:06 -- target/tls.sh@49 -- # local key hash crc 00:10:25.116 11:14:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:25.116 11:14:06 -- target/tls.sh@51 -- # hash=01 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # tail -c8 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # gzip -1 -c 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # head -c 4 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # crc='p$H�' 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.116 11:14:06 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.116 11:14:06 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:25.116 11:14:06 -- target/tls.sh@49 -- # local key hash crc 00:10:25.116 11:14:06 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:25.116 11:14:06 -- target/tls.sh@51 -- # hash=01 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # gzip -1 -c 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # head -c 4 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # tail -c8 00:10:25.116 11:14:06 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:25.116 11:14:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.116 11:14:06 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.116 11:14:06 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.116 11:14:06 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:25.116 11:14:06 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.116 11:14:06 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.116 11:14:06 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.116 11:14:06 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:25.116 11:14:06 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:25.375 11:14:06 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:25.634 11:14:07 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.634 11:14:07 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.634 11:14:07 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:25.893 [2024-10-13 11:14:07.343597] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.893 11:14:07 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:26.151 11:14:07 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:26.410 [2024-10-13 11:14:07.771671] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:26.410 [2024-10-13 11:14:07.771883] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.410 11:14:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:26.410 malloc0 00:10:26.410 11:14:08 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:26.669 11:14:08 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:26.927 11:14:08 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:39.148 Initializing NVMe Controllers 00:10:39.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:39.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:39.148 Initialization complete. Launching workers. 
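The two interchange keys written to key1.txt and key2.txt above come from format_interchange_psk. A minimal sketch of that derivation as it appears in the trace: the CRC32 of the hex-string key is read out of the gzip trailer, appended to the key, and the result base64-encoded under the NVMeTLSkey-1 prefix (for this particular key the CRC bytes contain no NULs, so holding them in a shell variable is safe):

    key=00112233445566778899aabbccddeeff   # 32 hex characters, i.e. a 16-byte PSK
    hash=01                                # hash identifier carried in the NVMeTLSkey-1 prefix
    # first 4 bytes of the gzip trailer = CRC32 of the uncompressed input
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # interchange form: NVMeTLSkey-1:<hash>:base64(key || crc):
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: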
00:10:39.148 ======================================================== 00:10:39.148 Latency(us) 00:10:39.148 Device Information : IOPS MiB/s Average min max 00:10:39.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11480.45 44.85 5575.78 980.31 8093.82 00:10:39.148 ======================================================== 00:10:39.148 Total : 11480.45 44.85 5575.78 980.31 8093.82 00:10:39.148 00:10:39.148 11:14:18 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:39.148 11:14:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:39.148 11:14:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:39.148 11:14:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:39.148 11:14:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:39.148 11:14:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:39.148 11:14:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:39.148 11:14:18 -- target/tls.sh@28 -- # bdevperf_pid=64555 00:10:39.148 11:14:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:39.148 11:14:18 -- target/tls.sh@31 -- # waitforlisten 64555 /var/tmp/bdevperf.sock 00:10:39.148 11:14:18 -- common/autotest_common.sh@819 -- # '[' -z 64555 ']' 00:10:39.148 11:14:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:39.148 11:14:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:39.148 11:14:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:39.148 11:14:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:39.148 11:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:39.148 [2024-10-13 11:14:18.761009] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:39.148 [2024-10-13 11:14:18.761092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64555 ] 00:10:39.148 [2024-10-13 11:14:18.897500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.148 [2024-10-13 11:14:18.965434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.148 11:14:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:39.148 11:14:19 -- common/autotest_common.sh@852 -- # return 0 00:10:39.148 11:14:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:39.148 [2024-10-13 11:14:19.963722] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:39.148 TLSTESTn1 00:10:39.148 11:14:20 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:39.148 Running I/O for 10 seconds... 
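run_bdevperf wires up the initiator side of the TLS check: bdevperf is started idle (-z), a TLS-enabled controller is attached over its RPC socket with the same PSK file the target registered for host1, and perform_tests drives the verify workload. Condensed from the trace (the test waits for the bdevperf RPC socket via waitforlisten before issuing the attach):

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $spdk/test/nvmf/target/key1.txt
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests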
00:10:49.129 00:10:49.129 Latency(us) 00:10:49.129 [2024-10-13T11:14:30.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.129 [2024-10-13T11:14:30.731Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:49.129 Verification LBA range: start 0x0 length 0x2000 00:10:49.129 TLSTESTn1 : 10.01 6395.52 24.98 0.00 0.00 19982.62 4379.00 25261.15 00:10:49.129 [2024-10-13T11:14:30.731Z] =================================================================================================================== 00:10:49.129 [2024-10-13T11:14:30.731Z] Total : 6395.52 24.98 0.00 0.00 19982.62 4379.00 25261.15 00:10:49.129 0 00:10:49.129 11:14:30 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.129 11:14:30 -- target/tls.sh@45 -- # killprocess 64555 00:10:49.129 11:14:30 -- common/autotest_common.sh@926 -- # '[' -z 64555 ']' 00:10:49.129 11:14:30 -- common/autotest_common.sh@930 -- # kill -0 64555 00:10:49.129 11:14:30 -- common/autotest_common.sh@931 -- # uname 00:10:49.129 11:14:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:49.129 11:14:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64555 00:10:49.129 11:14:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:49.129 11:14:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:49.129 killing process with pid 64555 00:10:49.129 11:14:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64555' 00:10:49.129 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.129 00:10:49.129 Latency(us) 00:10:49.129 [2024-10-13T11:14:30.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.129 [2024-10-13T11:14:30.731Z] =================================================================================================================== 00:10:49.129 [2024-10-13T11:14:30.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:49.129 11:14:30 -- common/autotest_common.sh@945 -- # kill 64555 00:10:49.129 11:14:30 -- common/autotest_common.sh@950 -- # wait 64555 00:10:49.129 11:14:30 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.129 11:14:30 -- common/autotest_common.sh@640 -- # local es=0 00:10:49.129 11:14:30 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.129 11:14:30 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:49.129 11:14:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.129 11:14:30 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:49.129 11:14:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.129 11:14:30 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.129 11:14:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:49.129 11:14:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:49.129 11:14:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:49.129 11:14:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:49.129 11:14:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.129 
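What follows are the negative checks: the same attach is repeated with the wrong PSK, then the wrong host NQN, then a nonexistent subsystem, each wrapped in NOT so that a connection failure is the passing outcome. A hedged sketch of that wrapper pattern, inferred from the es bookkeeping in the trace (the real helper in autotest_common.sh also special-cases exit codes above 128):

    # succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt   # wrong key, so the attach must fail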
11:14:30 -- target/tls.sh@28 -- # bdevperf_pid=64688 00:10:49.129 11:14:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:49.129 11:14:30 -- target/tls.sh@31 -- # waitforlisten 64688 /var/tmp/bdevperf.sock 00:10:49.129 11:14:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:49.129 11:14:30 -- common/autotest_common.sh@819 -- # '[' -z 64688 ']' 00:10:49.129 11:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.129 11:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:49.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.129 11:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.129 11:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:49.129 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:10:49.129 [2024-10-13 11:14:30.485195] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:49.129 [2024-10-13 11:14:30.485298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64688 ] 00:10:49.129 [2024-10-13 11:14:30.618401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.129 [2024-10-13 11:14:30.673031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.067 11:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:50.067 11:14:31 -- common/autotest_common.sh@852 -- # return 0 00:10:50.067 11:14:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:50.327 [2024-10-13 11:14:31.724560] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:50.327 [2024-10-13 11:14:31.735713] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:50.327 [2024-10-13 11:14:31.735790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa48650 (107): Transport endpoint is not connected 00:10:50.327 [2024-10-13 11:14:31.736765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa48650 (9): Bad file descriptor 00:10:50.327 [2024-10-13 11:14:31.737761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:50.327 [2024-10-13 11:14:31.737795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:50.327 [2024-10-13 11:14:31.737804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:50.327 request: 00:10:50.327 { 00:10:50.327 "name": "TLSTEST", 00:10:50.327 "trtype": "tcp", 00:10:50.327 "traddr": "10.0.0.2", 00:10:50.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.327 "adrfam": "ipv4", 00:10:50.327 "trsvcid": "4420", 00:10:50.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.327 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:50.327 "method": "bdev_nvme_attach_controller", 00:10:50.327 "req_id": 1 00:10:50.327 } 00:10:50.327 Got JSON-RPC error response 00:10:50.327 response: 00:10:50.327 { 00:10:50.327 "code": -32602, 00:10:50.327 "message": "Invalid parameters" 00:10:50.327 } 00:10:50.327 11:14:31 -- target/tls.sh@36 -- # killprocess 64688 00:10:50.327 11:14:31 -- common/autotest_common.sh@926 -- # '[' -z 64688 ']' 00:10:50.327 11:14:31 -- common/autotest_common.sh@930 -- # kill -0 64688 00:10:50.327 11:14:31 -- common/autotest_common.sh@931 -- # uname 00:10:50.327 11:14:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:50.327 11:14:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64688 00:10:50.327 11:14:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:50.327 11:14:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:50.327 killing process with pid 64688 00:10:50.327 11:14:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64688' 00:10:50.327 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.327 00:10:50.327 Latency(us) 00:10:50.327 [2024-10-13T11:14:31.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.327 [2024-10-13T11:14:31.929Z] =================================================================================================================== 00:10:50.327 [2024-10-13T11:14:31.929Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:50.327 11:14:31 -- common/autotest_common.sh@945 -- # kill 64688 00:10:50.327 11:14:31 -- common/autotest_common.sh@950 -- # wait 64688 00:10:50.587 11:14:31 -- target/tls.sh@37 -- # return 1 00:10:50.587 11:14:31 -- common/autotest_common.sh@643 -- # es=1 00:10:50.587 11:14:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:50.587 11:14:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:50.587 11:14:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:50.587 11:14:31 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.587 11:14:31 -- common/autotest_common.sh@640 -- # local es=0 00:10:50.587 11:14:31 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.587 11:14:31 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:50.587 11:14:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:50.587 11:14:31 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:50.587 11:14:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:50.587 11:14:31 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.587 11:14:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:50.587 11:14:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:50.587 11:14:31 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:10:50.587 11:14:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:50.587 11:14:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:50.587 11:14:31 -- target/tls.sh@28 -- # bdevperf_pid=64716 00:10:50.587 11:14:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:50.587 11:14:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:50.587 11:14:31 -- target/tls.sh@31 -- # waitforlisten 64716 /var/tmp/bdevperf.sock 00:10:50.587 11:14:31 -- common/autotest_common.sh@819 -- # '[' -z 64716 ']' 00:10:50.587 11:14:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.587 11:14:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:50.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.587 11:14:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.587 11:14:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:50.587 11:14:31 -- common/autotest_common.sh@10 -- # set +x 00:10:50.587 [2024-10-13 11:14:32.025815] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:50.587 [2024-10-13 11:14:32.025926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64716 ] 00:10:50.587 [2024-10-13 11:14:32.161267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.846 [2024-10-13 11:14:32.213933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.415 11:14:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:51.415 11:14:32 -- common/autotest_common.sh@852 -- # return 0 00:10:51.415 11:14:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.675 [2024-10-13 11:14:33.224606] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:51.675 [2024-10-13 11:14:33.235756] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:51.675 [2024-10-13 11:14:33.235803] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:51.675 [2024-10-13 11:14:33.235850] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:51.675 [2024-10-13 11:14:33.235961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2243650 (107): Transport endpoint is not connected 00:10:51.675 [2024-10-13 11:14:33.236952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2243650 (9): Bad file descriptor 00:10:51.675 [2024-10-13 11:14:33.237948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:51.675 [2024-10-13 11:14:33.237967] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:51.675 [2024-10-13 11:14:33.237991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:51.675 request: 00:10:51.675 { 00:10:51.675 "name": "TLSTEST", 00:10:51.675 "trtype": "tcp", 00:10:51.675 "traddr": "10.0.0.2", 00:10:51.675 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:51.675 "adrfam": "ipv4", 00:10:51.675 "trsvcid": "4420", 00:10:51.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.675 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:51.675 "method": "bdev_nvme_attach_controller", 00:10:51.675 "req_id": 1 00:10:51.675 } 00:10:51.675 Got JSON-RPC error response 00:10:51.675 response: 00:10:51.675 { 00:10:51.675 "code": -32602, 00:10:51.675 "message": "Invalid parameters" 00:10:51.675 } 00:10:51.675 11:14:33 -- target/tls.sh@36 -- # killprocess 64716 00:10:51.675 11:14:33 -- common/autotest_common.sh@926 -- # '[' -z 64716 ']' 00:10:51.675 11:14:33 -- common/autotest_common.sh@930 -- # kill -0 64716 00:10:51.675 11:14:33 -- common/autotest_common.sh@931 -- # uname 00:10:51.675 11:14:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:51.675 11:14:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64716 00:10:51.935 11:14:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:51.935 11:14:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:51.935 killing process with pid 64716 00:10:51.935 11:14:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64716' 00:10:51.935 Received shutdown signal, test time was about 10.000000 seconds 00:10:51.935 00:10:51.935 Latency(us) 00:10:51.935 [2024-10-13T11:14:33.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.935 [2024-10-13T11:14:33.537Z] =================================================================================================================== 00:10:51.935 [2024-10-13T11:14:33.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:51.935 11:14:33 -- common/autotest_common.sh@945 -- # kill 64716 00:10:51.935 11:14:33 -- common/autotest_common.sh@950 -- # wait 64716 00:10:51.935 11:14:33 -- target/tls.sh@37 -- # return 1 00:10:51.935 11:14:33 -- common/autotest_common.sh@643 -- # es=1 00:10:51.935 11:14:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:51.935 11:14:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:51.935 11:14:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:51.936 11:14:33 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.936 11:14:33 -- common/autotest_common.sh@640 -- # local es=0 00:10:51.936 11:14:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.936 11:14:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:51.936 11:14:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:51.936 11:14:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:51.936 11:14:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:51.936 11:14:33 -- common/autotest_common.sh@643 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.936 11:14:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:51.936 11:14:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:51.936 11:14:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:51.936 11:14:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:51.936 11:14:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.936 11:14:33 -- target/tls.sh@28 -- # bdevperf_pid=64743 00:10:51.936 11:14:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:51.936 11:14:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:51.936 11:14:33 -- target/tls.sh@31 -- # waitforlisten 64743 /var/tmp/bdevperf.sock 00:10:51.936 11:14:33 -- common/autotest_common.sh@819 -- # '[' -z 64743 ']' 00:10:51.936 11:14:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.936 11:14:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:51.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:51.936 11:14:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.936 11:14:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:51.936 11:14:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.936 [2024-10-13 11:14:33.527664] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:51.936 [2024-10-13 11:14:33.527771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64743 ] 00:10:52.196 [2024-10-13 11:14:33.667240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.196 [2024-10-13 11:14:33.719058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.132 11:14:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:53.132 11:14:34 -- common/autotest_common.sh@852 -- # return 0 00:10:53.132 11:14:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:53.392 [2024-10-13 11:14:34.736464] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:53.392 [2024-10-13 11:14:34.746033] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:53.392 [2024-10-13 11:14:34.746082] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:53.392 [2024-10-13 11:14:34.746128] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:53.392 [2024-10-13 11:14:34.746808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73650 
(107): Transport endpoint is not connected 00:10:53.392 [2024-10-13 11:14:34.747798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73650 (9): Bad file descriptor 00:10:53.392 [2024-10-13 11:14:34.748793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:53.392 [2024-10-13 11:14:34.748812] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:53.392 [2024-10-13 11:14:34.748836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:53.392 request: 00:10:53.392 { 00:10:53.392 "name": "TLSTEST", 00:10:53.392 "trtype": "tcp", 00:10:53.392 "traddr": "10.0.0.2", 00:10:53.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.392 "adrfam": "ipv4", 00:10:53.392 "trsvcid": "4420", 00:10:53.392 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:53.392 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:53.392 "method": "bdev_nvme_attach_controller", 00:10:53.392 "req_id": 1 00:10:53.392 } 00:10:53.392 Got JSON-RPC error response 00:10:53.392 response: 00:10:53.392 { 00:10:53.392 "code": -32602, 00:10:53.392 "message": "Invalid parameters" 00:10:53.392 } 00:10:53.392 11:14:34 -- target/tls.sh@36 -- # killprocess 64743 00:10:53.392 11:14:34 -- common/autotest_common.sh@926 -- # '[' -z 64743 ']' 00:10:53.393 11:14:34 -- common/autotest_common.sh@930 -- # kill -0 64743 00:10:53.393 11:14:34 -- common/autotest_common.sh@931 -- # uname 00:10:53.393 11:14:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:53.393 11:14:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64743 00:10:53.393 11:14:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:53.393 11:14:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:53.393 killing process with pid 64743 00:10:53.393 11:14:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64743' 00:10:53.393 Received shutdown signal, test time was about 10.000000 seconds 00:10:53.393 00:10:53.393 Latency(us) 00:10:53.393 [2024-10-13T11:14:34.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.393 [2024-10-13T11:14:34.995Z] =================================================================================================================== 00:10:53.393 [2024-10-13T11:14:34.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:53.393 11:14:34 -- common/autotest_common.sh@945 -- # kill 64743 00:10:53.393 11:14:34 -- common/autotest_common.sh@950 -- # wait 64743 00:10:53.393 11:14:34 -- target/tls.sh@37 -- # return 1 00:10:53.393 11:14:34 -- common/autotest_common.sh@643 -- # es=1 00:10:53.393 11:14:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:53.393 11:14:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:53.393 11:14:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:53.393 11:14:34 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:53.393 11:14:34 -- common/autotest_common.sh@640 -- # local es=0 00:10:53.393 11:14:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:53.393 11:14:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:53.393 11:14:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:53.393 11:14:34 -- common/autotest_common.sh@632 -- # 
type -t run_bdevperf 00:10:53.393 11:14:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:53.393 11:14:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:53.393 11:14:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:53.393 11:14:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:53.393 11:14:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:53.393 11:14:34 -- target/tls.sh@23 -- # psk= 00:10:53.393 11:14:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:53.393 11:14:34 -- target/tls.sh@28 -- # bdevperf_pid=64771 00:10:53.393 11:14:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:53.393 11:14:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:53.393 11:14:34 -- target/tls.sh@31 -- # waitforlisten 64771 /var/tmp/bdevperf.sock 00:10:53.393 11:14:34 -- common/autotest_common.sh@819 -- # '[' -z 64771 ']' 00:10:53.393 11:14:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.393 11:14:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:53.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.393 11:14:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.393 11:14:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:53.393 11:14:34 -- common/autotest_common.sh@10 -- # set +x 00:10:53.652 [2024-10-13 11:14:35.035564] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:53.652 [2024-10-13 11:14:35.035666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64771 ] 00:10:53.652 [2024-10-13 11:14:35.174815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.652 [2024-10-13 11:14:35.227405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.588 11:14:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.588 11:14:35 -- common/autotest_common.sh@852 -- # return 0 00:10:54.588 11:14:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:54.847 [2024-10-13 11:14:36.256294] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:54.847 [2024-10-13 11:14:36.257651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5a010 (9): Bad file descriptor 00:10:54.847 [2024-10-13 11:14:36.258628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:54.847 [2024-10-13 11:14:36.258650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:54.847 [2024-10-13 11:14:36.258659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:54.847 request: 00:10:54.847 { 00:10:54.847 "name": "TLSTEST", 00:10:54.847 "trtype": "tcp", 00:10:54.847 "traddr": "10.0.0.2", 00:10:54.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.847 "adrfam": "ipv4", 00:10:54.847 "trsvcid": "4420", 00:10:54.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.847 "method": "bdev_nvme_attach_controller", 00:10:54.847 "req_id": 1 00:10:54.847 } 00:10:54.847 Got JSON-RPC error response 00:10:54.847 response: 00:10:54.847 { 00:10:54.847 "code": -32602, 00:10:54.847 "message": "Invalid parameters" 00:10:54.847 } 00:10:54.847 11:14:36 -- target/tls.sh@36 -- # killprocess 64771 00:10:54.847 11:14:36 -- common/autotest_common.sh@926 -- # '[' -z 64771 ']' 00:10:54.847 11:14:36 -- common/autotest_common.sh@930 -- # kill -0 64771 00:10:54.847 11:14:36 -- common/autotest_common.sh@931 -- # uname 00:10:54.847 11:14:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:54.847 11:14:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64771 00:10:54.847 11:14:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:54.847 11:14:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:54.847 killing process with pid 64771 00:10:54.847 11:14:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64771' 00:10:54.847 Received shutdown signal, test time was about 10.000000 seconds 00:10:54.847 00:10:54.847 Latency(us) 00:10:54.847 [2024-10-13T11:14:36.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.847 [2024-10-13T11:14:36.449Z] =================================================================================================================== 00:10:54.847 [2024-10-13T11:14:36.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:54.847 11:14:36 -- common/autotest_common.sh@945 -- # kill 64771 00:10:54.847 11:14:36 -- common/autotest_common.sh@950 -- # wait 64771 00:10:55.106 11:14:36 -- target/tls.sh@37 -- # return 1 00:10:55.106 11:14:36 -- common/autotest_common.sh@643 -- # es=1 00:10:55.106 11:14:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:55.106 11:14:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:55.106 11:14:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:55.106 11:14:36 -- target/tls.sh@167 -- # killprocess 64320 00:10:55.106 11:14:36 -- common/autotest_common.sh@926 -- # '[' -z 64320 ']' 00:10:55.106 11:14:36 -- common/autotest_common.sh@930 -- # kill -0 64320 00:10:55.106 11:14:36 -- common/autotest_common.sh@931 -- # uname 00:10:55.106 11:14:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:55.106 11:14:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64320 00:10:55.106 11:14:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:55.106 11:14:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:55.106 killing process with pid 64320 00:10:55.106 11:14:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64320' 00:10:55.106 11:14:36 -- common/autotest_common.sh@945 -- # kill 64320 00:10:55.106 11:14:36 -- common/autotest_common.sh@950 -- # wait 64320 00:10:55.106 11:14:36 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:55.106 11:14:36 -- target/tls.sh@49 -- # local key hash crc 00:10:55.106 11:14:36 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:55.106 11:14:36 -- target/tls.sh@51 -- # hash=02 
00:10:55.365 11:14:36 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:55.365 11:14:36 -- target/tls.sh@52 -- # gzip -1 -c 00:10:55.365 11:14:36 -- target/tls.sh@52 -- # tail -c8 00:10:55.365 11:14:36 -- target/tls.sh@52 -- # head -c 4 00:10:55.365 11:14:36 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:55.365 11:14:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:55.365 11:14:36 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:55.365 11:14:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.365 11:14:36 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.365 11:14:36 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.365 11:14:36 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.365 11:14:36 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.365 11:14:36 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:55.365 11:14:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:55.365 11:14:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:55.365 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:10:55.365 11:14:36 -- nvmf/common.sh@469 -- # nvmfpid=64812 00:10:55.365 11:14:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:55.365 11:14:36 -- nvmf/common.sh@470 -- # waitforlisten 64812 00:10:55.365 11:14:36 -- common/autotest_common.sh@819 -- # '[' -z 64812 ']' 00:10:55.365 11:14:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.365 11:14:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:55.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.365 11:14:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.365 11:14:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:55.365 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:10:55.365 [2024-10-13 11:14:36.772797] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:55.366 [2024-10-13 11:14:36.772882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.366 [2024-10-13 11:14:36.903484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.366 [2024-10-13 11:14:36.956091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:55.366 [2024-10-13 11:14:36.956244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.366 [2024-10-13 11:14:36.956257] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.366 [2024-10-13 11:14:36.956264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
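[Note on the format_interchange_psk steps traced above: the last eight bytes of a gzip stream are the CRC32 of the input (little-endian) followed by the input length, so the pipeline `gzip -1 -c | tail -c8 | head -c 4` is just a shell way of extracting the raw CRC32 of the key string. A minimal sketch of the same derivation, reconstructed from the commands shown in the trace; the variable and file names are illustrative, not the ones tls.sh uses:

    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip trailer = CRC32 (4 bytes, little-endian) + input size (RFC 1952),
    # so this extracts the raw CRC32 of the ASCII key string
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # interchange format: version 1, hash id 02, base64(key || crc), trailing ':'
    key_long="NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
    echo "$key_long"          # per the trace above: NVMeTLSkey-1:02:MDAx...wWXNJw==:
    echo -n "$key_long" > key_long.txt
    chmod 0600 key_long.txt   # the target rejects PSK files with looser permissions

The 0600 requirement is exercised later in this log, where a chmod 0666 on the same file makes attach/add_host fail with "Incorrect permissions for PSK file".]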
00:10:55.366 [2024-10-13 11:14:36.956290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.341 11:14:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:56.341 11:14:37 -- common/autotest_common.sh@852 -- # return 0 00:10:56.341 11:14:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:56.341 11:14:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:56.341 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 11:14:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.341 11:14:37 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:56.341 11:14:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:56.341 11:14:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:56.599 [2024-10-13 11:14:38.054839] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.599 11:14:38 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:56.857 11:14:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:57.115 [2024-10-13 11:14:38.586990] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:57.115 [2024-10-13 11:14:38.587278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.115 11:14:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:57.374 malloc0 00:10:57.374 11:14:38 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:57.633 11:14:39 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:57.890 11:14:39 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:57.890 11:14:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:57.890 11:14:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:57.890 11:14:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:57.890 11:14:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:57.890 11:14:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:57.890 11:14:39 -- target/tls.sh@28 -- # bdevperf_pid=64868 00:10:57.890 11:14:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:57.890 11:14:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:57.890 11:14:39 -- target/tls.sh@31 -- # waitforlisten 64868 /var/tmp/bdevperf.sock 00:10:57.890 11:14:39 -- common/autotest_common.sh@819 -- # '[' -z 64868 ']' 00:10:57.890 11:14:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:57.890 11:14:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:57.890 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:10:57.890 11:14:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:57.891 11:14:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:57.891 11:14:39 -- common/autotest_common.sh@10 -- # set +x 00:10:57.891 [2024-10-13 11:14:39.337860] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:57.891 [2024-10-13 11:14:39.337943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64868 ] 00:10:57.891 [2024-10-13 11:14:39.473829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.148 [2024-10-13 11:14:39.542068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.714 11:14:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:58.714 11:14:40 -- common/autotest_common.sh@852 -- # return 0 00:10:58.714 11:14:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:58.972 [2024-10-13 11:14:40.472311] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:58.972 TLSTESTn1 00:10:58.972 11:14:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:59.231 Running I/O for 10 seconds... 00:11:09.230 00:11:09.230 Latency(us) 00:11:09.230 [2024-10-13T11:14:50.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.230 [2024-10-13T11:14:50.832Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:09.230 Verification LBA range: start 0x0 length 0x2000 00:11:09.230 TLSTESTn1 : 10.01 6269.42 24.49 0.00 0.00 20384.68 4140.68 26929.34 00:11:09.230 [2024-10-13T11:14:50.832Z] =================================================================================================================== 00:11:09.230 [2024-10-13T11:14:50.832Z] Total : 6269.42 24.49 0.00 0.00 20384.68 4140.68 26929.34 00:11:09.230 0 00:11:09.230 11:14:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.230 11:14:50 -- target/tls.sh@45 -- # killprocess 64868 00:11:09.230 11:14:50 -- common/autotest_common.sh@926 -- # '[' -z 64868 ']' 00:11:09.230 11:14:50 -- common/autotest_common.sh@930 -- # kill -0 64868 00:11:09.230 11:14:50 -- common/autotest_common.sh@931 -- # uname 00:11:09.230 11:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:09.230 11:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64868 00:11:09.230 killing process with pid 64868 00:11:09.230 Received shutdown signal, test time was about 10.000000 seconds 00:11:09.230 00:11:09.230 Latency(us) 00:11:09.230 [2024-10-13T11:14:50.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.230 [2024-10-13T11:14:50.832Z] =================================================================================================================== 00:11:09.230 [2024-10-13T11:14:50.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:09.230 11:14:50 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:09.230 11:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:09.230 11:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64868' 00:11:09.230 11:14:50 -- common/autotest_common.sh@945 -- # kill 64868 00:11:09.230 11:14:50 -- common/autotest_common.sh@950 -- # wait 64868 00:11:09.490 11:14:50 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:09.490 11:14:50 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:09.490 11:14:50 -- common/autotest_common.sh@640 -- # local es=0 00:11:09.490 11:14:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:09.490 11:14:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:09.490 11:14:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:09.490 11:14:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:09.490 11:14:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:09.490 11:14:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:09.490 11:14:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:09.490 11:14:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:09.490 11:14:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:09.490 11:14:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:09.490 11:14:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:09.490 11:14:50 -- target/tls.sh@28 -- # bdevperf_pid=64997 00:11:09.490 11:14:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:09.490 11:14:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:09.490 11:14:50 -- target/tls.sh@31 -- # waitforlisten 64997 /var/tmp/bdevperf.sock 00:11:09.490 11:14:50 -- common/autotest_common.sh@819 -- # '[' -z 64997 ']' 00:11:09.490 11:14:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.490 11:14:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.490 11:14:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.490 11:14:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.490 11:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.490 [2024-10-13 11:14:50.998158] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:09.490 [2024-10-13 11:14:50.998459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64997 ] 00:11:09.750 [2024-10-13 11:14:51.138876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.750 [2024-10-13 11:14:51.190383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.688 11:14:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:10.688 11:14:51 -- common/autotest_common.sh@852 -- # return 0 00:11:10.688 11:14:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.688 [2024-10-13 11:14:52.133595] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:10.688 [2024-10-13 11:14:52.133641] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:10.688 request: 00:11:10.688 { 00:11:10.688 "name": "TLSTEST", 00:11:10.688 "trtype": "tcp", 00:11:10.688 "traddr": "10.0.0.2", 00:11:10.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.688 "adrfam": "ipv4", 00:11:10.688 "trsvcid": "4420", 00:11:10.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.688 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:10.688 "method": "bdev_nvme_attach_controller", 00:11:10.688 "req_id": 1 00:11:10.688 } 00:11:10.688 Got JSON-RPC error response 00:11:10.688 response: 00:11:10.688 { 00:11:10.688 "code": -22, 00:11:10.688 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:10.688 } 00:11:10.688 11:14:52 -- target/tls.sh@36 -- # killprocess 64997 00:11:10.688 11:14:52 -- common/autotest_common.sh@926 -- # '[' -z 64997 ']' 00:11:10.688 11:14:52 -- common/autotest_common.sh@930 -- # kill -0 64997 00:11:10.688 11:14:52 -- common/autotest_common.sh@931 -- # uname 00:11:10.688 11:14:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:10.688 11:14:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64997 00:11:10.688 killing process with pid 64997 00:11:10.688 Received shutdown signal, test time was about 10.000000 seconds 00:11:10.688 00:11:10.688 Latency(us) 00:11:10.688 [2024-10-13T11:14:52.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.688 [2024-10-13T11:14:52.290Z] =================================================================================================================== 00:11:10.688 [2024-10-13T11:14:52.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:10.688 11:14:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:10.688 11:14:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:10.688 11:14:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64997' 00:11:10.688 11:14:52 -- common/autotest_common.sh@945 -- # kill 64997 00:11:10.688 11:14:52 -- common/autotest_common.sh@950 -- # wait 64997 00:11:10.947 11:14:52 -- target/tls.sh@37 -- # return 1 00:11:10.947 11:14:52 -- common/autotest_common.sh@643 -- # es=1 00:11:10.947 11:14:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:10.947 11:14:52 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:10.947 11:14:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:10.947 11:14:52 -- target/tls.sh@183 -- # killprocess 64812 00:11:10.947 11:14:52 -- common/autotest_common.sh@926 -- # '[' -z 64812 ']' 00:11:10.947 11:14:52 -- common/autotest_common.sh@930 -- # kill -0 64812 00:11:10.947 11:14:52 -- common/autotest_common.sh@931 -- # uname 00:11:10.947 11:14:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:10.947 11:14:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64812 00:11:10.947 killing process with pid 64812 00:11:10.947 11:14:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:10.947 11:14:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:10.947 11:14:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64812' 00:11:10.947 11:14:52 -- common/autotest_common.sh@945 -- # kill 64812 00:11:10.947 11:14:52 -- common/autotest_common.sh@950 -- # wait 64812 00:11:11.206 11:14:52 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:11.206 11:14:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:11.206 11:14:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:11.206 11:14:52 -- common/autotest_common.sh@10 -- # set +x 00:11:11.206 11:14:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:11.206 11:14:52 -- nvmf/common.sh@469 -- # nvmfpid=65035 00:11:11.206 11:14:52 -- nvmf/common.sh@470 -- # waitforlisten 65035 00:11:11.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.206 11:14:52 -- common/autotest_common.sh@819 -- # '[' -z 65035 ']' 00:11:11.206 11:14:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.206 11:14:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:11.206 11:14:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.206 11:14:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:11.206 11:14:52 -- common/autotest_common.sh@10 -- # set +x 00:11:11.206 [2024-10-13 11:14:52.626863] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:11.206 [2024-10-13 11:14:52.627154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.206 [2024-10-13 11:14:52.767930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.465 [2024-10-13 11:14:52.816930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:11.465 [2024-10-13 11:14:52.817292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.465 [2024-10-13 11:14:52.817382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.465 [2024-10-13 11:14:52.817396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.465 [2024-10-13 11:14:52.817428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.465 11:14:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:11.465 11:14:52 -- common/autotest_common.sh@852 -- # return 0 00:11:11.465 11:14:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.465 11:14:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:11.465 11:14:52 -- common/autotest_common.sh@10 -- # set +x 00:11:11.465 11:14:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.465 11:14:52 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.465 11:14:52 -- common/autotest_common.sh@640 -- # local es=0 00:11:11.465 11:14:52 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.465 11:14:52 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:11:11.465 11:14:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:11.465 11:14:52 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:11:11.465 11:14:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:11.465 11:14:52 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.465 11:14:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.465 11:14:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:11.724 [2024-10-13 11:14:53.199409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.724 11:14:53 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:11.983 11:14:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:12.241 [2024-10-13 11:14:53.623514] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:12.241 [2024-10-13 11:14:53.623725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.241 11:14:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:12.499 malloc0 00:11:12.499 11:14:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:12.758 11:14:54 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.758 [2024-10-13 11:14:54.304968] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:12.758 [2024-10-13 11:14:54.305174] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:12.758 [2024-10-13 11:14:54.305202] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:12.758 request: 00:11:12.758 { 00:11:12.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.758 "host": "nqn.2016-06.io.spdk:host1", 00:11:12.758 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:12.758 "method": "nvmf_subsystem_add_host", 00:11:12.758 
"req_id": 1 00:11:12.758 } 00:11:12.758 Got JSON-RPC error response 00:11:12.759 response: 00:11:12.759 { 00:11:12.759 "code": -32603, 00:11:12.759 "message": "Internal error" 00:11:12.759 } 00:11:12.759 11:14:54 -- common/autotest_common.sh@643 -- # es=1 00:11:12.759 11:14:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:12.759 11:14:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:12.759 11:14:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:12.759 11:14:54 -- target/tls.sh@189 -- # killprocess 65035 00:11:12.759 11:14:54 -- common/autotest_common.sh@926 -- # '[' -z 65035 ']' 00:11:12.759 11:14:54 -- common/autotest_common.sh@930 -- # kill -0 65035 00:11:12.759 11:14:54 -- common/autotest_common.sh@931 -- # uname 00:11:12.759 11:14:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:12.759 11:14:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65035 00:11:13.018 killing process with pid 65035 00:11:13.018 11:14:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:13.018 11:14:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:13.018 11:14:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65035' 00:11:13.018 11:14:54 -- common/autotest_common.sh@945 -- # kill 65035 00:11:13.018 11:14:54 -- common/autotest_common.sh@950 -- # wait 65035 00:11:13.018 11:14:54 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:13.018 11:14:54 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:13.018 11:14:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:13.018 11:14:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:13.018 11:14:54 -- common/autotest_common.sh@10 -- # set +x 00:11:13.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.018 11:14:54 -- nvmf/common.sh@469 -- # nvmfpid=65090 00:11:13.018 11:14:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:13.018 11:14:54 -- nvmf/common.sh@470 -- # waitforlisten 65090 00:11:13.018 11:14:54 -- common/autotest_common.sh@819 -- # '[' -z 65090 ']' 00:11:13.018 11:14:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.018 11:14:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:13.018 11:14:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.018 11:14:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:13.018 11:14:54 -- common/autotest_common.sh@10 -- # set +x 00:11:13.018 [2024-10-13 11:14:54.601886] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:13.018 [2024-10-13 11:14:54.602277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.277 [2024-10-13 11:14:54.738455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.277 [2024-10-13 11:14:54.787594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:13.277 [2024-10-13 11:14:54.787983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:13.277 [2024-10-13 11:14:54.788033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.277 [2024-10-13 11:14:54.788137] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.277 [2024-10-13 11:14:54.788194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.214 11:14:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:14.214 11:14:55 -- common/autotest_common.sh@852 -- # return 0 00:11:14.214 11:14:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:14.214 11:14:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:14.214 11:14:55 -- common/autotest_common.sh@10 -- # set +x 00:11:14.214 11:14:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.214 11:14:55 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:14.214 11:14:55 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:14.214 11:14:55 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:14.214 [2024-10-13 11:14:55.766680] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.214 11:14:55 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:14.472 11:14:55 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:14.731 [2024-10-13 11:14:56.182765] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:14.731 [2024-10-13 11:14:56.183009] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.731 11:14:56 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:14.990 malloc0 00:11:14.990 11:14:56 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:15.249 11:14:56 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:15.508 11:14:56 -- target/tls.sh@197 -- # bdevperf_pid=65139 00:11:15.508 11:14:56 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:15.508 11:14:56 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:15.508 11:14:56 -- target/tls.sh@200 -- # waitforlisten 65139 /var/tmp/bdevperf.sock 00:11:15.508 11:14:56 -- common/autotest_common.sh@819 -- # '[' -z 65139 ']' 00:11:15.508 11:14:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:15.508 11:14:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:15.508 11:14:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
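[Note on the setup traced above: collected in one place, the RPC sequence that stands up the TLS-protected TCP listener and then points bdevperf at it with the same PSK is sketched below. All subcommands and flags are taken verbatim from the trace; the rpc.py and key paths are shortened here for readability:

    # target side (nvmf_tgt RPC socket)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt
    # initiator side (bdevperf RPC socket); succeeds only while key_long.txt is mode 0600
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt

With the controller attached as TLSTEST, the test then drives the verify workload through bdevperf.py perform_tests, as in the TLSTESTn1 run earlier in this log.]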
00:11:15.508 11:14:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:15.508 11:14:56 -- common/autotest_common.sh@10 -- # set +x 00:11:15.508 [2024-10-13 11:14:56.988007] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:15.508 [2024-10-13 11:14:56.988266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65139 ] 00:11:15.767 [2024-10-13 11:14:57.125949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.767 [2024-10-13 11:14:57.195690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.336 11:14:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.336 11:14:57 -- common/autotest_common.sh@852 -- # return 0 00:11:16.336 11:14:57 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:16.594 [2024-10-13 11:14:58.105081] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:16.594 TLSTESTn1 00:11:16.594 11:14:58 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:17.160 11:14:58 -- target/tls.sh@205 -- # tgtconf='{ 00:11:17.160 "subsystems": [ 00:11:17.160 { 00:11:17.160 "subsystem": "iobuf", 00:11:17.160 "config": [ 00:11:17.160 { 00:11:17.160 "method": "iobuf_set_options", 00:11:17.160 "params": { 00:11:17.160 "small_pool_count": 8192, 00:11:17.160 "large_pool_count": 1024, 00:11:17.160 "small_bufsize": 8192, 00:11:17.160 "large_bufsize": 135168 00:11:17.160 } 00:11:17.160 } 00:11:17.160 ] 00:11:17.160 }, 00:11:17.160 { 00:11:17.160 "subsystem": "sock", 00:11:17.160 "config": [ 00:11:17.160 { 00:11:17.160 "method": "sock_impl_set_options", 00:11:17.160 "params": { 00:11:17.160 "impl_name": "uring", 00:11:17.160 "recv_buf_size": 2097152, 00:11:17.160 "send_buf_size": 2097152, 00:11:17.160 "enable_recv_pipe": true, 00:11:17.160 "enable_quickack": false, 00:11:17.160 "enable_placement_id": 0, 00:11:17.160 "enable_zerocopy_send_server": false, 00:11:17.160 "enable_zerocopy_send_client": false, 00:11:17.160 "zerocopy_threshold": 0, 00:11:17.160 "tls_version": 0, 00:11:17.160 "enable_ktls": false 00:11:17.160 } 00:11:17.160 }, 00:11:17.160 { 00:11:17.160 "method": "sock_impl_set_options", 00:11:17.160 "params": { 00:11:17.161 "impl_name": "posix", 00:11:17.161 "recv_buf_size": 2097152, 00:11:17.161 "send_buf_size": 2097152, 00:11:17.161 "enable_recv_pipe": true, 00:11:17.161 "enable_quickack": false, 00:11:17.161 "enable_placement_id": 0, 00:11:17.161 "enable_zerocopy_send_server": true, 00:11:17.161 "enable_zerocopy_send_client": false, 00:11:17.161 "zerocopy_threshold": 0, 00:11:17.161 "tls_version": 0, 00:11:17.161 "enable_ktls": false 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "sock_impl_set_options", 00:11:17.161 "params": { 00:11:17.161 "impl_name": "ssl", 00:11:17.161 "recv_buf_size": 4096, 00:11:17.161 "send_buf_size": 4096, 00:11:17.161 "enable_recv_pipe": true, 00:11:17.161 "enable_quickack": false, 00:11:17.161 "enable_placement_id": 0, 00:11:17.161 "enable_zerocopy_send_server": true, 00:11:17.161 "enable_zerocopy_send_client": false, 00:11:17.161 
"zerocopy_threshold": 0, 00:11:17.161 "tls_version": 0, 00:11:17.161 "enable_ktls": false 00:11:17.161 } 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "vmd", 00:11:17.161 "config": [] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "accel", 00:11:17.161 "config": [ 00:11:17.161 { 00:11:17.161 "method": "accel_set_options", 00:11:17.161 "params": { 00:11:17.161 "small_cache_size": 128, 00:11:17.161 "large_cache_size": 16, 00:11:17.161 "task_count": 2048, 00:11:17.161 "sequence_count": 2048, 00:11:17.161 "buf_count": 2048 00:11:17.161 } 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "bdev", 00:11:17.161 "config": [ 00:11:17.161 { 00:11:17.161 "method": "bdev_set_options", 00:11:17.161 "params": { 00:11:17.161 "bdev_io_pool_size": 65535, 00:11:17.161 "bdev_io_cache_size": 256, 00:11:17.161 "bdev_auto_examine": true, 00:11:17.161 "iobuf_small_cache_size": 128, 00:11:17.161 "iobuf_large_cache_size": 16 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_raid_set_options", 00:11:17.161 "params": { 00:11:17.161 "process_window_size_kb": 1024 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_iscsi_set_options", 00:11:17.161 "params": { 00:11:17.161 "timeout_sec": 30 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_nvme_set_options", 00:11:17.161 "params": { 00:11:17.161 "action_on_timeout": "none", 00:11:17.161 "timeout_us": 0, 00:11:17.161 "timeout_admin_us": 0, 00:11:17.161 "keep_alive_timeout_ms": 10000, 00:11:17.161 "transport_retry_count": 4, 00:11:17.161 "arbitration_burst": 0, 00:11:17.161 "low_priority_weight": 0, 00:11:17.161 "medium_priority_weight": 0, 00:11:17.161 "high_priority_weight": 0, 00:11:17.161 "nvme_adminq_poll_period_us": 10000, 00:11:17.161 "nvme_ioq_poll_period_us": 0, 00:11:17.161 "io_queue_requests": 0, 00:11:17.161 "delay_cmd_submit": true, 00:11:17.161 "bdev_retry_count": 3, 00:11:17.161 "transport_ack_timeout": 0, 00:11:17.161 "ctrlr_loss_timeout_sec": 0, 00:11:17.161 "reconnect_delay_sec": 0, 00:11:17.161 "fast_io_fail_timeout_sec": 0, 00:11:17.161 "generate_uuids": false, 00:11:17.161 "transport_tos": 0, 00:11:17.161 "io_path_stat": false, 00:11:17.161 "allow_accel_sequence": false 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_nvme_set_hotplug", 00:11:17.161 "params": { 00:11:17.161 "period_us": 100000, 00:11:17.161 "enable": false 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_malloc_create", 00:11:17.161 "params": { 00:11:17.161 "name": "malloc0", 00:11:17.161 "num_blocks": 8192, 00:11:17.161 "block_size": 4096, 00:11:17.161 "physical_block_size": 4096, 00:11:17.161 "uuid": "c6ed8ed6-7d05-4543-ab2e-538bba668c88", 00:11:17.161 "optimal_io_boundary": 0 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "bdev_wait_for_examine" 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "nbd", 00:11:17.161 "config": [] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "scheduler", 00:11:17.161 "config": [ 00:11:17.161 { 00:11:17.161 "method": "framework_set_scheduler", 00:11:17.161 "params": { 00:11:17.161 "name": "static" 00:11:17.161 } 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "subsystem": "nvmf", 00:11:17.161 "config": [ 00:11:17.161 { 00:11:17.161 "method": "nvmf_set_config", 00:11:17.161 "params": { 00:11:17.161 "discovery_filter": "match_any", 00:11:17.161 
"admin_cmd_passthru": { 00:11:17.161 "identify_ctrlr": false 00:11:17.161 } 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_set_max_subsystems", 00:11:17.161 "params": { 00:11:17.161 "max_subsystems": 1024 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_set_crdt", 00:11:17.161 "params": { 00:11:17.161 "crdt1": 0, 00:11:17.161 "crdt2": 0, 00:11:17.161 "crdt3": 0 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_create_transport", 00:11:17.161 "params": { 00:11:17.161 "trtype": "TCP", 00:11:17.161 "max_queue_depth": 128, 00:11:17.161 "max_io_qpairs_per_ctrlr": 127, 00:11:17.161 "in_capsule_data_size": 4096, 00:11:17.161 "max_io_size": 131072, 00:11:17.161 "io_unit_size": 131072, 00:11:17.161 "max_aq_depth": 128, 00:11:17.161 "num_shared_buffers": 511, 00:11:17.161 "buf_cache_size": 4294967295, 00:11:17.161 "dif_insert_or_strip": false, 00:11:17.161 "zcopy": false, 00:11:17.161 "c2h_success": false, 00:11:17.161 "sock_priority": 0, 00:11:17.161 "abort_timeout_sec": 1 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_create_subsystem", 00:11:17.161 "params": { 00:11:17.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.161 "allow_any_host": false, 00:11:17.161 "serial_number": "SPDK00000000000001", 00:11:17.161 "model_number": "SPDK bdev Controller", 00:11:17.161 "max_namespaces": 10, 00:11:17.161 "min_cntlid": 1, 00:11:17.161 "max_cntlid": 65519, 00:11:17.161 "ana_reporting": false 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_subsystem_add_host", 00:11:17.161 "params": { 00:11:17.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.161 "host": "nqn.2016-06.io.spdk:host1", 00:11:17.161 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_subsystem_add_ns", 00:11:17.161 "params": { 00:11:17.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.161 "namespace": { 00:11:17.161 "nsid": 1, 00:11:17.161 "bdev_name": "malloc0", 00:11:17.161 "nguid": "C6ED8ED67D054543AB2E538BBA668C88", 00:11:17.161 "uuid": "c6ed8ed6-7d05-4543-ab2e-538bba668c88" 00:11:17.161 } 00:11:17.161 } 00:11:17.161 }, 00:11:17.161 { 00:11:17.161 "method": "nvmf_subsystem_add_listener", 00:11:17.161 "params": { 00:11:17.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.161 "listen_address": { 00:11:17.161 "trtype": "TCP", 00:11:17.161 "adrfam": "IPv4", 00:11:17.161 "traddr": "10.0.0.2", 00:11:17.161 "trsvcid": "4420" 00:11:17.161 }, 00:11:17.161 "secure_channel": true 00:11:17.161 } 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 } 00:11:17.161 ] 00:11:17.161 }' 00:11:17.161 11:14:58 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:17.421 11:14:58 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:17.421 "subsystems": [ 00:11:17.421 { 00:11:17.421 "subsystem": "iobuf", 00:11:17.421 "config": [ 00:11:17.421 { 00:11:17.421 "method": "iobuf_set_options", 00:11:17.421 "params": { 00:11:17.421 "small_pool_count": 8192, 00:11:17.421 "large_pool_count": 1024, 00:11:17.421 "small_bufsize": 8192, 00:11:17.421 "large_bufsize": 135168 00:11:17.421 } 00:11:17.421 } 00:11:17.421 ] 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "subsystem": "sock", 00:11:17.421 "config": [ 00:11:17.421 { 00:11:17.421 "method": "sock_impl_set_options", 00:11:17.421 "params": { 00:11:17.421 "impl_name": "uring", 00:11:17.421 "recv_buf_size": 2097152, 00:11:17.421 "send_buf_size": 2097152, 
00:11:17.421 "enable_recv_pipe": true, 00:11:17.421 "enable_quickack": false, 00:11:17.421 "enable_placement_id": 0, 00:11:17.421 "enable_zerocopy_send_server": false, 00:11:17.421 "enable_zerocopy_send_client": false, 00:11:17.421 "zerocopy_threshold": 0, 00:11:17.421 "tls_version": 0, 00:11:17.421 "enable_ktls": false 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "sock_impl_set_options", 00:11:17.421 "params": { 00:11:17.421 "impl_name": "posix", 00:11:17.421 "recv_buf_size": 2097152, 00:11:17.421 "send_buf_size": 2097152, 00:11:17.421 "enable_recv_pipe": true, 00:11:17.421 "enable_quickack": false, 00:11:17.421 "enable_placement_id": 0, 00:11:17.421 "enable_zerocopy_send_server": true, 00:11:17.421 "enable_zerocopy_send_client": false, 00:11:17.421 "zerocopy_threshold": 0, 00:11:17.421 "tls_version": 0, 00:11:17.421 "enable_ktls": false 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "sock_impl_set_options", 00:11:17.421 "params": { 00:11:17.421 "impl_name": "ssl", 00:11:17.421 "recv_buf_size": 4096, 00:11:17.421 "send_buf_size": 4096, 00:11:17.421 "enable_recv_pipe": true, 00:11:17.421 "enable_quickack": false, 00:11:17.421 "enable_placement_id": 0, 00:11:17.421 "enable_zerocopy_send_server": true, 00:11:17.421 "enable_zerocopy_send_client": false, 00:11:17.421 "zerocopy_threshold": 0, 00:11:17.421 "tls_version": 0, 00:11:17.421 "enable_ktls": false 00:11:17.421 } 00:11:17.421 } 00:11:17.421 ] 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "subsystem": "vmd", 00:11:17.421 "config": [] 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "subsystem": "accel", 00:11:17.421 "config": [ 00:11:17.421 { 00:11:17.421 "method": "accel_set_options", 00:11:17.421 "params": { 00:11:17.421 "small_cache_size": 128, 00:11:17.421 "large_cache_size": 16, 00:11:17.421 "task_count": 2048, 00:11:17.421 "sequence_count": 2048, 00:11:17.421 "buf_count": 2048 00:11:17.421 } 00:11:17.421 } 00:11:17.421 ] 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "subsystem": "bdev", 00:11:17.421 "config": [ 00:11:17.421 { 00:11:17.421 "method": "bdev_set_options", 00:11:17.421 "params": { 00:11:17.421 "bdev_io_pool_size": 65535, 00:11:17.421 "bdev_io_cache_size": 256, 00:11:17.421 "bdev_auto_examine": true, 00:11:17.421 "iobuf_small_cache_size": 128, 00:11:17.421 "iobuf_large_cache_size": 16 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "bdev_raid_set_options", 00:11:17.421 "params": { 00:11:17.421 "process_window_size_kb": 1024 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "bdev_iscsi_set_options", 00:11:17.421 "params": { 00:11:17.421 "timeout_sec": 30 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "bdev_nvme_set_options", 00:11:17.421 "params": { 00:11:17.421 "action_on_timeout": "none", 00:11:17.421 "timeout_us": 0, 00:11:17.421 "timeout_admin_us": 0, 00:11:17.421 "keep_alive_timeout_ms": 10000, 00:11:17.421 "transport_retry_count": 4, 00:11:17.421 "arbitration_burst": 0, 00:11:17.421 "low_priority_weight": 0, 00:11:17.421 "medium_priority_weight": 0, 00:11:17.421 "high_priority_weight": 0, 00:11:17.421 "nvme_adminq_poll_period_us": 10000, 00:11:17.421 "nvme_ioq_poll_period_us": 0, 00:11:17.421 "io_queue_requests": 512, 00:11:17.421 "delay_cmd_submit": true, 00:11:17.421 "bdev_retry_count": 3, 00:11:17.421 "transport_ack_timeout": 0, 00:11:17.421 "ctrlr_loss_timeout_sec": 0, 00:11:17.421 "reconnect_delay_sec": 0, 00:11:17.421 "fast_io_fail_timeout_sec": 0, 00:11:17.421 "generate_uuids": false, 00:11:17.421 
"transport_tos": 0, 00:11:17.421 "io_path_stat": false, 00:11:17.421 "allow_accel_sequence": false 00:11:17.421 } 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "method": "bdev_nvme_attach_controller", 00:11:17.421 "params": { 00:11:17.421 "name": "TLSTEST", 00:11:17.421 "trtype": "TCP", 00:11:17.422 "adrfam": "IPv4", 00:11:17.422 "traddr": "10.0.0.2", 00:11:17.422 "trsvcid": "4420", 00:11:17.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.422 "prchk_reftag": false, 00:11:17.422 "prchk_guard": false, 00:11:17.422 "ctrlr_loss_timeout_sec": 0, 00:11:17.422 "reconnect_delay_sec": 0, 00:11:17.422 "fast_io_fail_timeout_sec": 0, 00:11:17.422 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:17.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.422 "hdgst": false, 00:11:17.422 "ddgst": false 00:11:17.422 } 00:11:17.422 }, 00:11:17.422 { 00:11:17.422 "method": "bdev_nvme_set_hotplug", 00:11:17.422 "params": { 00:11:17.422 "period_us": 100000, 00:11:17.422 "enable": false 00:11:17.422 } 00:11:17.422 }, 00:11:17.422 { 00:11:17.422 "method": "bdev_wait_for_examine" 00:11:17.422 } 00:11:17.422 ] 00:11:17.422 }, 00:11:17.422 { 00:11:17.422 "subsystem": "nbd", 00:11:17.422 "config": [] 00:11:17.422 } 00:11:17.422 ] 00:11:17.422 }' 00:11:17.422 11:14:58 -- target/tls.sh@208 -- # killprocess 65139 00:11:17.422 11:14:58 -- common/autotest_common.sh@926 -- # '[' -z 65139 ']' 00:11:17.422 11:14:58 -- common/autotest_common.sh@930 -- # kill -0 65139 00:11:17.422 11:14:58 -- common/autotest_common.sh@931 -- # uname 00:11:17.422 11:14:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:17.422 11:14:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65139 00:11:17.422 killing process with pid 65139 00:11:17.422 Received shutdown signal, test time was about 10.000000 seconds 00:11:17.422 00:11:17.422 Latency(us) 00:11:17.422 [2024-10-13T11:14:59.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.422 [2024-10-13T11:14:59.024Z] =================================================================================================================== 00:11:17.422 [2024-10-13T11:14:59.024Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:17.422 11:14:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:17.422 11:14:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:17.422 11:14:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65139' 00:11:17.422 11:14:58 -- common/autotest_common.sh@945 -- # kill 65139 00:11:17.422 11:14:58 -- common/autotest_common.sh@950 -- # wait 65139 00:11:17.681 11:14:59 -- target/tls.sh@209 -- # killprocess 65090 00:11:17.681 11:14:59 -- common/autotest_common.sh@926 -- # '[' -z 65090 ']' 00:11:17.681 11:14:59 -- common/autotest_common.sh@930 -- # kill -0 65090 00:11:17.681 11:14:59 -- common/autotest_common.sh@931 -- # uname 00:11:17.681 11:14:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:17.681 11:14:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65090 00:11:17.681 killing process with pid 65090 00:11:17.681 11:14:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:17.681 11:14:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:17.681 11:14:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65090' 00:11:17.681 11:14:59 -- common/autotest_common.sh@945 -- # kill 65090 00:11:17.681 11:14:59 -- common/autotest_common.sh@950 -- # 
wait 65090 00:11:17.681 11:14:59 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:17.682 11:14:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:17.682 11:14:59 -- target/tls.sh@212 -- # echo '{ 00:11:17.682 "subsystems": [ 00:11:17.682 { 00:11:17.682 "subsystem": "iobuf", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "iobuf_set_options", 00:11:17.682 "params": { 00:11:17.682 "small_pool_count": 8192, 00:11:17.682 "large_pool_count": 1024, 00:11:17.682 "small_bufsize": 8192, 00:11:17.682 "large_bufsize": 135168 00:11:17.682 } 00:11:17.682 } 00:11:17.682 ] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "sock", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "sock_impl_set_options", 00:11:17.682 "params": { 00:11:17.682 "impl_name": "uring", 00:11:17.682 "recv_buf_size": 2097152, 00:11:17.682 "send_buf_size": 2097152, 00:11:17.682 "enable_recv_pipe": true, 00:11:17.682 "enable_quickack": false, 00:11:17.682 "enable_placement_id": 0, 00:11:17.682 "enable_zerocopy_send_server": false, 00:11:17.682 "enable_zerocopy_send_client": false, 00:11:17.682 "zerocopy_threshold": 0, 00:11:17.682 "tls_version": 0, 00:11:17.682 "enable_ktls": false 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "sock_impl_set_options", 00:11:17.682 "params": { 00:11:17.682 "impl_name": "posix", 00:11:17.682 "recv_buf_size": 2097152, 00:11:17.682 "send_buf_size": 2097152, 00:11:17.682 "enable_recv_pipe": true, 00:11:17.682 "enable_quickack": false, 00:11:17.682 "enable_placement_id": 0, 00:11:17.682 "enable_zerocopy_send_server": true, 00:11:17.682 "enable_zerocopy_send_client": false, 00:11:17.682 "zerocopy_threshold": 0, 00:11:17.682 "tls_version": 0, 00:11:17.682 "enable_ktls": false 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "sock_impl_set_options", 00:11:17.682 "params": { 00:11:17.682 "impl_name": "ssl", 00:11:17.682 "recv_buf_size": 4096, 00:11:17.682 "send_buf_size": 4096, 00:11:17.682 "enable_recv_pipe": true, 00:11:17.682 "enable_quickack": false, 00:11:17.682 "enable_placement_id": 0, 00:11:17.682 "enable_zerocopy_send_server": true, 00:11:17.682 "enable_zerocopy_send_client": false, 00:11:17.682 "zerocopy_threshold": 0, 00:11:17.682 "tls_version": 0, 00:11:17.682 "enable_ktls": false 00:11:17.682 } 00:11:17.682 } 00:11:17.682 ] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "vmd", 00:11:17.682 "config": [] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "accel", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "accel_set_options", 00:11:17.682 "params": { 00:11:17.682 "small_cache_size": 128, 00:11:17.682 "large_cache_size": 16, 00:11:17.682 "task_count": 2048, 00:11:17.682 "sequence_count": 2048, 00:11:17.682 "buf_count": 2048 00:11:17.682 } 00:11:17.682 } 00:11:17.682 ] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "bdev", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "bdev_set_options", 00:11:17.682 "params": { 00:11:17.682 "bdev_io_pool_size": 65535, 00:11:17.682 "bdev_io_cache_size": 256, 00:11:17.682 "bdev_auto_examine": true, 00:11:17.682 "iobuf_small_cache_size": 128, 00:11:17.682 "iobuf_large_cache_size": 16 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_raid_set_options", 00:11:17.682 "params": { 00:11:17.682 "process_window_size_kb": 1024 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_iscsi_set_options", 00:11:17.682 "params": { 00:11:17.682 "timeout_sec": 30 
00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_nvme_set_options", 00:11:17.682 "params": { 00:11:17.682 "action_on_timeout": "none", 00:11:17.682 "timeout_us": 0, 00:11:17.682 "timeout_admin_us": 0, 00:11:17.682 "keep_alive_timeout_ms": 10000, 00:11:17.682 "transport_retry_count": 4, 00:11:17.682 "arbitration_burst": 0, 00:11:17.682 "low_priority_weight": 0, 00:11:17.682 "medium_priority_weight": 0, 00:11:17.682 "high_priority_weight": 0, 00:11:17.682 "nvme_adminq_poll_period_us": 10000, 00:11:17.682 "nvme_ioq_poll_period_us": 0, 00:11:17.682 "io_queue_requests": 0, 00:11:17.682 "delay_cmd_submit": true, 00:11:17.682 "bdev_retry_count": 3, 00:11:17.682 "transport_ack_timeout": 0, 00:11:17.682 "ctrlr_loss_timeout_sec": 0, 00:11:17.682 "reconnect_delay_sec": 0, 00:11:17.682 "fast_io_fail_timeout_sec": 0, 00:11:17.682 "generate_uuids": false, 00:11:17.682 "transport_tos": 0, 00:11:17.682 "io_path_stat": false, 00:11:17.682 "allow_accel_sequence": false 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_nvme_set_hotplug", 00:11:17.682 "params": { 00:11:17.682 "period_us": 100000, 00:11:17.682 "enable": false 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_malloc_create", 00:11:17.682 "params": { 00:11:17.682 "name": "malloc0", 00:11:17.682 "num_blocks": 8192, 00:11:17.682 "block_size": 4096, 00:11:17.682 "physical_block_size": 4096, 00:11:17.682 "uuid": "c6ed8ed6-7d05-4543-ab2e-538bba668c88", 00:11:17.682 "optimal_io_boundary": 0 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "bdev_wait_for_examine" 00:11:17.682 } 00:11:17.682 ] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "nbd", 00:11:17.682 "config": [] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "scheduler", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "framework_set_scheduler", 00:11:17.682 "params": { 00:11:17.682 "name": "static" 00:11:17.682 } 00:11:17.682 } 00:11:17.682 ] 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "subsystem": "nvmf", 00:11:17.682 "config": [ 00:11:17.682 { 00:11:17.682 "method": "nvmf_set_config", 00:11:17.682 "params": { 00:11:17.682 "discovery_filter": "match_any", 00:11:17.682 "admin_cmd_passthru": { 00:11:17.682 "identify_ctrlr": false 00:11:17.682 } 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_set_max_subsystems", 00:11:17.682 "params": { 00:11:17.682 "max_subsystems": 1024 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_set_crdt", 00:11:17.682 "params": { 00:11:17.682 "crdt1": 0, 00:11:17.682 "crdt2": 0, 00:11:17.682 "crdt3": 0 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_create_transport", 00:11:17.682 "params": { 00:11:17.682 "trtype": "TCP", 00:11:17.682 "max_queue_depth": 128, 00:11:17.682 "max_io_qpairs_per_ctrlr": 127, 00:11:17.682 "in_capsule_data_size": 4096, 00:11:17.682 "max_io_size": 131072, 00:11:17.682 "io_unit_size": 131072, 00:11:17.682 "max_aq_depth": 128, 00:11:17.682 "num_shared_buffers": 511, 00:11:17.682 "buf_cache_size": 4294967295, 00:11:17.682 "dif_insert_or_strip": false, 00:11:17.682 "zcopy": false, 00:11:17.682 "c2h_success": false, 00:11:17.682 "sock_priority": 0, 00:11:17.682 "abort_timeout_sec": 1 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_create_subsystem", 00:11:17.682 "params": { 00:11:17.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.682 "allow_any_host": false, 00:11:17.682 "serial_number": "SPDK00000000000001", 
00:11:17.682 "model_number": "SPDK bdev Controller", 00:11:17.682 "max_namespaces": 10, 00:11:17.682 "min_cntlid": 1, 00:11:17.682 "max_cntlid": 65519, 00:11:17.682 "ana_reporting": false 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_subsystem_add_host", 00:11:17.682 "params": { 00:11:17.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.682 "host": "nqn.2016-06.io.spdk:host1", 00:11:17.682 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:17.682 } 00:11:17.682 }, 00:11:17.682 { 00:11:17.682 "method": "nvmf_subsystem_add_ns", 00:11:17.682 "params": { 00:11:17.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.682 "namespace": { 00:11:17.682 "nsid": 1, 00:11:17.682 "bdev_name": "malloc0", 00:11:17.682 "nguid": "C6ED8ED67D054543AB2E538BBA668C88", 00:11:17.682 "uuid": "c6ed8ed6-7d05-4543-ab2e-538bba668c88" 00:11:17.682 } 00:11:17.682 } 00:11:17.683 }, 00:11:17.683 { 00:11:17.683 "method": "nvmf_subsystem_add_listener", 00:11:17.683 "params": { 00:11:17.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.683 "listen_address": { 00:11:17.683 "trtype": "TCP", 00:11:17.683 "adrfam": "IPv4", 00:11:17.683 "traddr": "10.0.0.2", 00:11:17.683 "trsvcid": "4420" 00:11:17.683 }, 00:11:17.683 "secure_channel": true 00:11:17.683 } 00:11:17.683 } 00:11:17.683 ] 00:11:17.683 } 00:11:17.683 ] 00:11:17.683 }' 00:11:17.683 11:14:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:17.683 11:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.683 11:14:59 -- nvmf/common.sh@469 -- # nvmfpid=65182 00:11:17.683 11:14:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:17.683 11:14:59 -- nvmf/common.sh@470 -- # waitforlisten 65182 00:11:17.683 11:14:59 -- common/autotest_common.sh@819 -- # '[' -z 65182 ']' 00:11:17.683 11:14:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.683 11:14:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:17.683 11:14:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.683 11:14:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:17.683 11:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.941 [2024-10-13 11:14:59.313731] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:17.941 [2024-10-13 11:14:59.314088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.941 [2024-10-13 11:14:59.452611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.942 [2024-10-13 11:14:59.501899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.942 [2024-10-13 11:14:59.502037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.942 [2024-10-13 11:14:59.502049] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.942 [2024-10-13 11:14:59.502056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:17.942 [2024-10-13 11:14:59.502084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.200 [2024-10-13 11:14:59.681428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.200 [2024-10-13 11:14:59.713378] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:18.200 [2024-10-13 11:14:59.713644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.768 11:15:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:18.768 11:15:00 -- common/autotest_common.sh@852 -- # return 0 00:11:18.768 11:15:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:18.768 11:15:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:18.768 11:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.768 11:15:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.768 11:15:00 -- target/tls.sh@216 -- # bdevperf_pid=65214 00:11:18.768 11:15:00 -- target/tls.sh@217 -- # waitforlisten 65214 /var/tmp/bdevperf.sock 00:11:18.768 11:15:00 -- common/autotest_common.sh@819 -- # '[' -z 65214 ']' 00:11:18.768 11:15:00 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:18.769 11:15:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:18.769 11:15:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:18.769 11:15:00 -- target/tls.sh@213 -- # echo '{ 00:11:18.769 "subsystems": [ 00:11:18.769 { 00:11:18.769 "subsystem": "iobuf", 00:11:18.769 "config": [ 00:11:18.769 { 00:11:18.769 "method": "iobuf_set_options", 00:11:18.769 "params": { 00:11:18.769 "small_pool_count": 8192, 00:11:18.769 "large_pool_count": 1024, 00:11:18.769 "small_bufsize": 8192, 00:11:18.769 "large_bufsize": 135168 00:11:18.769 } 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "subsystem": "sock", 00:11:18.769 "config": [ 00:11:18.769 { 00:11:18.769 "method": "sock_impl_set_options", 00:11:18.769 "params": { 00:11:18.769 "impl_name": "uring", 00:11:18.769 "recv_buf_size": 2097152, 00:11:18.769 "send_buf_size": 2097152, 00:11:18.769 "enable_recv_pipe": true, 00:11:18.769 "enable_quickack": false, 00:11:18.769 "enable_placement_id": 0, 00:11:18.769 "enable_zerocopy_send_server": false, 00:11:18.769 "enable_zerocopy_send_client": false, 00:11:18.769 "zerocopy_threshold": 0, 00:11:18.769 "tls_version": 0, 00:11:18.769 "enable_ktls": false 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "sock_impl_set_options", 00:11:18.769 "params": { 00:11:18.769 "impl_name": "posix", 00:11:18.769 "recv_buf_size": 2097152, 00:11:18.769 "send_buf_size": 2097152, 00:11:18.769 "enable_recv_pipe": true, 00:11:18.769 "enable_quickack": false, 00:11:18.769 "enable_placement_id": 0, 00:11:18.769 "enable_zerocopy_send_server": true, 00:11:18.769 "enable_zerocopy_send_client": false, 00:11:18.769 "zerocopy_threshold": 0, 00:11:18.769 "tls_version": 0, 00:11:18.769 "enable_ktls": false 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "sock_impl_set_options", 00:11:18.769 "params": { 00:11:18.769 "impl_name": "ssl", 00:11:18.769 "recv_buf_size": 4096, 00:11:18.769 "send_buf_size": 4096, 00:11:18.769 "enable_recv_pipe": true, 00:11:18.769 "enable_quickack": false, 00:11:18.769 "enable_placement_id": 0, 00:11:18.769 "enable_zerocopy_send_server": true, 
00:11:18.769 "enable_zerocopy_send_client": false, 00:11:18.769 "zerocopy_threshold": 0, 00:11:18.769 "tls_version": 0, 00:11:18.769 "enable_ktls": false 00:11:18.769 } 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "subsystem": "vmd", 00:11:18.769 "config": [] 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "subsystem": "accel", 00:11:18.769 "config": [ 00:11:18.769 { 00:11:18.769 "method": "accel_set_options", 00:11:18.769 "params": { 00:11:18.769 "small_cache_size": 128, 00:11:18.769 "large_cache_size": 16, 00:11:18.769 "task_count": 2048, 00:11:18.769 "sequence_count": 2048, 00:11:18.769 "buf_count": 2048 00:11:18.769 } 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "subsystem": "bdev", 00:11:18.769 "config": [ 00:11:18.769 { 00:11:18.769 "method": "bdev_set_options", 00:11:18.769 "params": { 00:11:18.769 "bdev_io_pool_size": 65535, 00:11:18.769 "bdev_io_cache_size": 256, 00:11:18.769 "bdev_auto_examine": true, 00:11:18.769 "iobuf_small_cache_size": 128, 00:11:18.769 "iobuf_large_cache_size": 16 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_raid_set_options", 00:11:18.769 "params": { 00:11:18.769 "process_window_size_kb": 1024 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_iscsi_set_options", 00:11:18.769 "params": { 00:11:18.769 "timeout_sec": 30 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_nvme_set_options", 00:11:18.769 "params": { 00:11:18.769 "action_on_timeout": "none", 00:11:18.769 "timeout_us": 0, 00:11:18.769 "timeout_admin_us": 0, 00:11:18.769 "keep_alive_timeout_ms": 10000, 00:11:18.769 "transport_retry_count": 4, 00:11:18.769 "arbitration_burst": 0, 00:11:18.769 "low_priority_weight": 0, 00:11:18.769 "medium_priority_weight": 0, 00:11:18.769 "high_priority_weight": 0, 00:11:18.769 "nvme_adminq_poll_period_us": 10000, 00:11:18.769 "nvme_ioq_poll_period_us": 0, 00:11:18.769 "io_queue_requests": 512, 00:11:18.769 "delay_cmd_submit": true, 00:11:18.769 "bdev_retry_count": 3, 00:11:18.769 "transport_ack_timeout": 0, 00:11:18.769 "ctrlr_loss_timeout_sec": 0, 00:11:18.769 "reconnect_delay_sec": 0, 00:11:18.769 "fast_io_fail_timeout_sec": 0, 00:11:18.769 "generate_uuids": false, 00:11:18.769 "transport_tos": 0, 00:11:18.769 "io_path_stat": false, 00:11:18.769 "allow_accel_sequence": false 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_nvme_attach_controller", 00:11:18.769 "params": { 00:11:18.769 "name": "TLSTEST", 00:11:18.769 "trtype": "TCP", 00:11:18.769 "adrfam": "IPv4", 00:11:18.769 "traddr": "10.0.0.2", 00:11:18.769 "trsvcid": "4420", 00:11:18.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.769 "prchk_reftag": false, 00:11:18.769 "prchk_guard": false, 00:11:18.769 "ctrlr_loss_timeout_sec": 0, 00:11:18.769 "reconnect_delay_sec": 0, 00:11:18.769 "fast_io_fail_timeout_sec": 0, 00:11:18.769 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:18.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.769 "hdgst": false, 00:11:18.769 "ddgst": false 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_nvme_set_hotplug", 00:11:18.769 "params": { 00:11:18.769 "period_us": 100000, 00:11:18.769 "enable": false 00:11:18.769 } 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "method": "bdev_wait_for_examine" 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "subsystem": "nbd", 00:11:18.769 "config": [] 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }' 00:11:18.769 
11:15:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:18.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:18.769 11:15:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:18.769 11:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 [2024-10-13 11:15:00.341478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:18.769 [2024-10-13 11:15:00.341755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65214 ] 00:11:19.037 [2024-10-13 11:15:00.476885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.037 [2024-10-13 11:15:00.529253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.321 [2024-10-13 11:15:00.653145] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:19.889 11:15:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:19.889 11:15:01 -- common/autotest_common.sh@852 -- # return 0 00:11:19.889 11:15:01 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:19.889 Running I/O for 10 seconds... 00:11:29.868 00:11:29.868 Latency(us) 00:11:29.868 [2024-10-13T11:15:11.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.868 [2024-10-13T11:15:11.470Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:29.868 Verification LBA range: start 0x0 length 0x2000 00:11:29.868 TLSTESTn1 : 10.01 6271.17 24.50 0.00 0.00 20378.17 4587.52 21686.46 00:11:29.868 [2024-10-13T11:15:11.470Z] =================================================================================================================== 00:11:29.868 [2024-10-13T11:15:11.470Z] Total : 6271.17 24.50 0.00 0.00 20378.17 4587.52 21686.46 00:11:29.868 0 00:11:29.868 11:15:11 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.868 11:15:11 -- target/tls.sh@223 -- # killprocess 65214 00:11:29.868 11:15:11 -- common/autotest_common.sh@926 -- # '[' -z 65214 ']' 00:11:29.868 11:15:11 -- common/autotest_common.sh@930 -- # kill -0 65214 00:11:29.868 11:15:11 -- common/autotest_common.sh@931 -- # uname 00:11:29.868 11:15:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:29.868 11:15:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65214 00:11:30.127 killing process with pid 65214 00:11:30.127 Received shutdown signal, test time was about 10.000000 seconds 00:11:30.127 00:11:30.127 Latency(us) 00:11:30.127 [2024-10-13T11:15:11.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.127 [2024-10-13T11:15:11.729Z] =================================================================================================================== 00:11:30.127 [2024-10-13T11:15:11.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:30.127 11:15:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:30.127 11:15:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:30.127 11:15:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65214' 00:11:30.127 11:15:11 -- common/autotest_common.sh@945 
-- # kill 65214 00:11:30.127 11:15:11 -- common/autotest_common.sh@950 -- # wait 65214 00:11:30.127 11:15:11 -- target/tls.sh@224 -- # killprocess 65182 00:11:30.127 11:15:11 -- common/autotest_common.sh@926 -- # '[' -z 65182 ']' 00:11:30.127 11:15:11 -- common/autotest_common.sh@930 -- # kill -0 65182 00:11:30.127 11:15:11 -- common/autotest_common.sh@931 -- # uname 00:11:30.127 11:15:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.127 11:15:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65182 00:11:30.127 killing process with pid 65182 00:11:30.127 11:15:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:30.127 11:15:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:30.127 11:15:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65182' 00:11:30.127 11:15:11 -- common/autotest_common.sh@945 -- # kill 65182 00:11:30.127 11:15:11 -- common/autotest_common.sh@950 -- # wait 65182 00:11:30.386 11:15:11 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:30.386 11:15:11 -- target/tls.sh@227 -- # cleanup 00:11:30.386 11:15:11 -- target/tls.sh@15 -- # process_shm --id 0 00:11:30.386 11:15:11 -- common/autotest_common.sh@796 -- # type=--id 00:11:30.386 11:15:11 -- common/autotest_common.sh@797 -- # id=0 00:11:30.386 11:15:11 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:11:30.386 11:15:11 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:30.386 11:15:11 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:11:30.386 11:15:11 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:11:30.386 11:15:11 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:11:30.386 11:15:11 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:30.386 nvmf_trace.0 00:11:30.386 11:15:11 -- common/autotest_common.sh@811 -- # return 0 00:11:30.386 11:15:11 -- target/tls.sh@16 -- # killprocess 65214 00:11:30.386 11:15:11 -- common/autotest_common.sh@926 -- # '[' -z 65214 ']' 00:11:30.386 Process with pid 65214 is not found 00:11:30.386 11:15:11 -- common/autotest_common.sh@930 -- # kill -0 65214 00:11:30.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65214) - No such process 00:11:30.386 11:15:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 65214 is not found' 00:11:30.386 11:15:11 -- target/tls.sh@17 -- # nvmftestfini 00:11:30.386 11:15:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:30.386 11:15:11 -- nvmf/common.sh@116 -- # sync 00:11:30.645 11:15:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:30.645 11:15:11 -- nvmf/common.sh@119 -- # set +e 00:11:30.645 11:15:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:30.645 11:15:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:30.645 rmmod nvme_tcp 00:11:30.645 rmmod nvme_fabrics 00:11:30.645 rmmod nvme_keyring 00:11:30.645 11:15:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:30.645 11:15:12 -- nvmf/common.sh@123 -- # set -e 00:11:30.645 11:15:12 -- nvmf/common.sh@124 -- # return 0 00:11:30.645 11:15:12 -- nvmf/common.sh@477 -- # '[' -n 65182 ']' 00:11:30.645 11:15:12 -- nvmf/common.sh@478 -- # killprocess 65182 00:11:30.645 11:15:12 -- common/autotest_common.sh@926 -- # '[' -z 65182 ']' 00:11:30.645 Process with pid 65182 is not found 00:11:30.645 11:15:12 -- common/autotest_common.sh@930 -- # kill -0 65182 
00:11:30.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65182) - No such process 00:11:30.645 11:15:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 65182 is not found' 00:11:30.645 11:15:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:30.645 11:15:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:30.645 11:15:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:30.645 11:15:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.645 11:15:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:30.645 11:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.645 11:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.645 11:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.645 11:15:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:30.645 11:15:12 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:30.645 ************************************ 00:11:30.645 END TEST nvmf_tls 00:11:30.645 ************************************ 00:11:30.645 00:11:30.645 real 1m9.006s 00:11:30.645 user 1m47.880s 00:11:30.645 sys 0m23.595s 00:11:30.645 11:15:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.645 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 11:15:12 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:30.645 11:15:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:30.645 11:15:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.645 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 ************************************ 00:11:30.645 START TEST nvmf_fips 00:11:30.645 ************************************ 00:11:30.645 11:15:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:30.645 * Looking for test storage... 
00:11:30.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:30.645 11:15:12 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:30.645 11:15:12 -- nvmf/common.sh@7 -- # uname -s 00:11:30.645 11:15:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.645 11:15:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.645 11:15:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.645 11:15:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.645 11:15:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.645 11:15:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.646 11:15:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.646 11:15:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.646 11:15:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.646 11:15:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.646 11:15:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:30.646 11:15:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:30.646 11:15:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.646 11:15:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.646 11:15:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:30.646 11:15:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.646 11:15:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.646 11:15:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.646 11:15:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.646 11:15:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.646 11:15:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.646 11:15:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.646 11:15:12 -- paths/export.sh@5 -- 
# export PATH 00:11:30.646 11:15:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.646 11:15:12 -- nvmf/common.sh@46 -- # : 0 00:11:30.646 11:15:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:30.646 11:15:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:30.646 11:15:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:30.646 11:15:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.646 11:15:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.646 11:15:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:30.646 11:15:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:30.646 11:15:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:30.646 11:15:12 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.646 11:15:12 -- fips/fips.sh@89 -- # check_openssl_version 00:11:30.646 11:15:12 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:30.646 11:15:12 -- fips/fips.sh@85 -- # openssl version 00:11:30.646 11:15:12 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:30.646 11:15:12 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:30.646 11:15:12 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:30.646 11:15:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.646 11:15:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.646 11:15:12 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.646 11:15:12 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.646 11:15:12 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.646 11:15:12 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.646 11:15:12 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:30.646 11:15:12 -- scripts/common.sh@339 -- # ver1_l=3 00:11:30.646 11:15:12 -- scripts/common.sh@340 -- # ver2_l=3 00:11:30.646 11:15:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.646 11:15:12 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.646 11:15:12 -- scripts/common.sh@347 -- # : 1 00:11:30.646 11:15:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.646 11:15:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.646 11:15:12 -- scripts/common.sh@364 -- # decimal 3 00:11:30.905 11:15:12 -- scripts/common.sh@352 -- # local d=3 00:11:30.905 11:15:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:30.905 11:15:12 -- scripts/common.sh@354 -- # echo 3 00:11:30.905 11:15:12 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:30.905 11:15:12 -- scripts/common.sh@365 -- # decimal 3 00:11:30.905 11:15:12 -- scripts/common.sh@352 -- # local d=3 00:11:30.905 11:15:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:30.905 11:15:12 -- scripts/common.sh@354 -- # echo 3 00:11:30.905 11:15:12 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:30.905 11:15:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.906 11:15:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.906 11:15:12 -- scripts/common.sh@363 -- # (( v++ )) 00:11:30.906 11:15:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.906 11:15:12 -- scripts/common.sh@364 -- # decimal 1 00:11:30.906 11:15:12 -- scripts/common.sh@352 -- # local d=1 00:11:30.906 11:15:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.906 11:15:12 -- scripts/common.sh@354 -- # echo 1 00:11:30.906 11:15:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.906 11:15:12 -- scripts/common.sh@365 -- # decimal 0 00:11:30.906 11:15:12 -- scripts/common.sh@352 -- # local d=0 00:11:30.906 11:15:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:30.906 11:15:12 -- scripts/common.sh@354 -- # echo 0 00:11:30.906 11:15:12 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:30.906 11:15:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.906 11:15:12 -- scripts/common.sh@366 -- # return 0 00:11:30.906 11:15:12 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:30.906 11:15:12 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:11:30.906 11:15:12 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:30.906 11:15:12 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:30.906 11:15:12 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:30.906 11:15:12 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:30.906 11:15:12 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:30.906 11:15:12 -- fips/fips.sh@113 -- # build_openssl_config 00:11:30.906 11:15:12 -- fips/fips.sh@37 -- # cat 00:11:30.906 11:15:12 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:11:30.906 11:15:12 -- fips/fips.sh@58 -- # cat - 00:11:30.906 11:15:12 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:30.906 11:15:12 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:30.906 11:15:12 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:30.906 11:15:12 -- fips/fips.sh@116 -- # openssl list -providers 00:11:30.906 11:15:12 -- fips/fips.sh@116 -- # grep name 00:11:30.906 11:15:12 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:30.906 11:15:12 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:30.906 11:15:12 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:30.906 11:15:12 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:30.906 11:15:12 -- common/autotest_common.sh@640 -- # local es=0 00:11:30.906 11:15:12 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:30.906 11:15:12 -- fips/fips.sh@127 -- # : 00:11:30.906 11:15:12 -- common/autotest_common.sh@628 -- # local arg=openssl 00:11:30.906 11:15:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.906 11:15:12 -- common/autotest_common.sh@632 -- # type -t openssl 00:11:30.906 11:15:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.906 11:15:12 -- common/autotest_common.sh@634 -- # type -P openssl 00:11:30.906 11:15:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.906 11:15:12 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:11:30.906 11:15:12 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:11:30.906 11:15:12 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:11:30.906 Error setting digest 00:11:30.906 40F29477597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:30.906 40F29477597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:30.906 11:15:12 -- common/autotest_common.sh@643 -- # es=1 00:11:30.906 11:15:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:30.906 11:15:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:30.906 11:15:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:30.906 11:15:12 -- fips/fips.sh@130 -- # nvmftestinit 00:11:30.906 11:15:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:30.906 11:15:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.906 11:15:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:30.906 11:15:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:30.906 11:15:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:30.906 11:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.906 11:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.906 11:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.906 11:15:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:30.906 11:15:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:30.906 11:15:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:30.906 11:15:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:30.906 11:15:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:30.906 11:15:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:30.906 11:15:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.906 11:15:12 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.906 11:15:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:30.906 11:15:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:30.906 11:15:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:30.906 11:15:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:30.906 11:15:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:30.906 11:15:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.906 11:15:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:30.906 11:15:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:30.906 11:15:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:30.906 11:15:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:30.906 11:15:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:30.906 11:15:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:30.906 Cannot find device "nvmf_tgt_br" 00:11:30.906 11:15:12 -- nvmf/common.sh@154 -- # true 00:11:30.906 11:15:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.906 Cannot find device "nvmf_tgt_br2" 00:11:30.906 11:15:12 -- nvmf/common.sh@155 -- # true 00:11:30.906 11:15:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:30.906 11:15:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:30.906 Cannot find device "nvmf_tgt_br" 00:11:30.906 11:15:12 -- nvmf/common.sh@157 -- # true 00:11:30.906 11:15:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:30.906 Cannot find device "nvmf_tgt_br2" 00:11:30.906 11:15:12 -- nvmf/common.sh@158 -- # true 00:11:30.906 11:15:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:30.906 11:15:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:31.165 11:15:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:31.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:31.165 11:15:12 -- nvmf/common.sh@161 -- # true 00:11:31.165 11:15:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:31.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:31.165 11:15:12 -- nvmf/common.sh@162 -- # true 00:11:31.165 11:15:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:31.165 11:15:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:31.165 11:15:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:31.165 11:15:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:31.165 11:15:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:31.165 11:15:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:31.165 11:15:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:31.165 11:15:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:31.165 11:15:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:31.165 11:15:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:31.165 11:15:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:31.165 11:15:12 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:31.165 11:15:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:31.165 11:15:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:31.165 11:15:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:31.165 11:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:31.165 11:15:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:31.165 11:15:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:31.165 11:15:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:31.165 11:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:31.165 11:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:31.165 11:15:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:31.165 11:15:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:31.165 11:15:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:31.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:31.165 00:11:31.165 --- 10.0.0.2 ping statistics --- 00:11:31.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.165 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:31.165 11:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:31.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:31.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:31.165 00:11:31.165 --- 10.0.0.3 ping statistics --- 00:11:31.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.165 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:31.165 11:15:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:31.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:31.165 00:11:31.165 --- 10.0.0.1 ping statistics --- 00:11:31.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.165 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:31.165 11:15:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.165 11:15:12 -- nvmf/common.sh@421 -- # return 0 00:11:31.165 11:15:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:31.165 11:15:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.165 11:15:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:31.165 11:15:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:31.165 11:15:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.165 11:15:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:31.165 11:15:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:31.165 11:15:12 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:31.165 11:15:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:31.165 11:15:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:31.165 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:31.165 11:15:12 -- nvmf/common.sh@469 -- # nvmfpid=65569 00:11:31.165 11:15:12 -- nvmf/common.sh@470 -- # waitforlisten 65569 00:11:31.165 11:15:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:31.165 11:15:12 -- common/autotest_common.sh@819 -- # '[' -z 65569 ']' 00:11:31.165 11:15:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.165 11:15:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:31.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.165 11:15:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.165 11:15:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:31.165 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:31.425 [2024-10-13 11:15:12.798141] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:31.425 [2024-10-13 11:15:12.798252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.425 [2024-10-13 11:15:12.935746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.425 [2024-10-13 11:15:12.983889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.425 [2024-10-13 11:15:12.984034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.425 [2024-10-13 11:15:12.984045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.425 [2024-10-13 11:15:12.984053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
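The target starting here runs inside the nvmf_tgt_ns_spdk namespace put together a few lines earlier by nvmf_veth_init. That topology, reduced to its essential commands (interface names and addresses as in the trace; the second target interface, 10.0.0.3, and the stale-device cleanup are omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic in and let the bridge forward between its ports
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check: the initiator-side address can reach the target inside the namespace
  ping -c 1 10.0.0.2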
00:11:31.425 [2024-10-13 11:15:12.984080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.361 11:15:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:32.361 11:15:13 -- common/autotest_common.sh@852 -- # return 0 00:11:32.362 11:15:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:32.362 11:15:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:32.362 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 11:15:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.362 11:15:13 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:32.362 11:15:13 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:32.362 11:15:13 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:32.362 11:15:13 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:32.362 11:15:13 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:32.362 11:15:13 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:32.362 11:15:13 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:32.362 11:15:13 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.633 [2024-10-13 11:15:14.100672] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.633 [2024-10-13 11:15:14.116635] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:32.633 [2024-10-13 11:15:14.116808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.633 malloc0 00:11:32.633 11:15:14 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:32.633 11:15:14 -- fips/fips.sh@147 -- # bdevperf_pid=65603 00:11:32.633 11:15:14 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:32.633 11:15:14 -- fips/fips.sh@148 -- # waitforlisten 65603 /var/tmp/bdevperf.sock 00:11:32.633 11:15:14 -- common/autotest_common.sh@819 -- # '[' -z 65603 ']' 00:11:32.633 11:15:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:32.633 11:15:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:32.633 11:15:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:32.633 11:15:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.633 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.904 [2024-10-13 11:15:14.235973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:32.904 [2024-10-13 11:15:14.236097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65603 ] 00:11:32.904 [2024-10-13 11:15:14.362192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.904 [2024-10-13 11:15:14.418291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.840 11:15:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.840 11:15:15 -- common/autotest_common.sh@852 -- # return 0 00:11:33.840 11:15:15 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:33.840 [2024-10-13 11:15:15.380676] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:34.099 TLSTESTn1 00:11:34.099 11:15:15 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:34.099 Running I/O for 10 seconds... 00:11:44.076 00:11:44.076 Latency(us) 00:11:44.076 [2024-10-13T11:15:25.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.076 [2024-10-13T11:15:25.678Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:44.076 Verification LBA range: start 0x0 length 0x2000 00:11:44.077 TLSTESTn1 : 10.02 6137.51 23.97 0.00 0.00 20820.31 4110.89 21328.99 00:11:44.077 [2024-10-13T11:15:25.679Z] =================================================================================================================== 00:11:44.077 [2024-10-13T11:15:25.679Z] Total : 6137.51 23.97 0.00 0.00 20820.31 4110.89 21328.99 00:11:44.077 0 00:11:44.077 11:15:25 -- fips/fips.sh@1 -- # cleanup 00:11:44.077 11:15:25 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:44.077 11:15:25 -- common/autotest_common.sh@796 -- # type=--id 00:11:44.077 11:15:25 -- common/autotest_common.sh@797 -- # id=0 00:11:44.077 11:15:25 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:11:44.077 11:15:25 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:44.077 11:15:25 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:11:44.077 11:15:25 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:11:44.077 11:15:25 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:11:44.077 11:15:25 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:44.077 nvmf_trace.0 00:11:44.077 11:15:25 -- common/autotest_common.sh@811 -- # return 0 00:11:44.077 11:15:25 -- fips/fips.sh@16 -- # killprocess 65603 00:11:44.077 11:15:25 -- common/autotest_common.sh@926 -- # '[' -z 65603 ']' 00:11:44.077 11:15:25 -- common/autotest_common.sh@930 -- # kill -0 65603 00:11:44.077 11:15:25 -- common/autotest_common.sh@931 -- # uname 00:11:44.337 11:15:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:44.337 11:15:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65603 00:11:44.337 11:15:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:44.337 killing process with pid 65603 00:11:44.337 11:15:25 -- 
common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:44.337 11:15:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65603' 00:11:44.337 11:15:25 -- common/autotest_common.sh@945 -- # kill 65603 00:11:44.337 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.337 00:11:44.337 Latency(us) 00:11:44.337 [2024-10-13T11:15:25.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.337 [2024-10-13T11:15:25.939Z] =================================================================================================================== 00:11:44.337 [2024-10-13T11:15:25.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.337 11:15:25 -- common/autotest_common.sh@950 -- # wait 65603 00:11:44.337 11:15:25 -- fips/fips.sh@17 -- # nvmftestfini 00:11:44.337 11:15:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:44.337 11:15:25 -- nvmf/common.sh@116 -- # sync 00:11:44.337 11:15:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:44.337 11:15:25 -- nvmf/common.sh@119 -- # set +e 00:11:44.337 11:15:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:44.337 11:15:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:44.337 rmmod nvme_tcp 00:11:44.596 rmmod nvme_fabrics 00:11:44.596 rmmod nvme_keyring 00:11:44.596 11:15:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:44.596 11:15:25 -- nvmf/common.sh@123 -- # set -e 00:11:44.596 11:15:25 -- nvmf/common.sh@124 -- # return 0 00:11:44.596 11:15:25 -- nvmf/common.sh@477 -- # '[' -n 65569 ']' 00:11:44.596 11:15:25 -- nvmf/common.sh@478 -- # killprocess 65569 00:11:44.596 11:15:25 -- common/autotest_common.sh@926 -- # '[' -z 65569 ']' 00:11:44.596 11:15:25 -- common/autotest_common.sh@930 -- # kill -0 65569 00:11:44.596 11:15:25 -- common/autotest_common.sh@931 -- # uname 00:11:44.596 11:15:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:44.596 11:15:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65569 00:11:44.596 11:15:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:44.596 killing process with pid 65569 00:11:44.596 11:15:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:44.596 11:15:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65569' 00:11:44.596 11:15:26 -- common/autotest_common.sh@945 -- # kill 65569 00:11:44.596 11:15:26 -- common/autotest_common.sh@950 -- # wait 65569 00:11:44.855 11:15:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:44.855 11:15:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:44.855 11:15:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:44.855 11:15:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.855 11:15:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.855 11:15:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.855 11:15:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:44.855 11:15:26 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:44.855 00:11:44.855 real 0m14.115s 00:11:44.855 user 0m19.341s 00:11:44.855 sys 0m5.606s 00:11:44.855 11:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.855 11:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:44.855 ************************************ 
00:11:44.855 END TEST nvmf_fips 00:11:44.855 ************************************ 00:11:44.855 11:15:26 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:44.855 11:15:26 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:44.855 11:15:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:44.855 11:15:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.855 11:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:44.855 ************************************ 00:11:44.855 START TEST nvmf_fuzz 00:11:44.855 ************************************ 00:11:44.855 11:15:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:44.855 * Looking for test storage... 00:11:44.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:44.855 11:15:26 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:44.855 11:15:26 -- nvmf/common.sh@7 -- # uname -s 00:11:44.855 11:15:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.855 11:15:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.855 11:15:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.855 11:15:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.855 11:15:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.855 11:15:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.855 11:15:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.855 11:15:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.855 11:15:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.855 11:15:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:44.855 11:15:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:44.855 11:15:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.855 11:15:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.855 11:15:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:44.855 11:15:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.855 11:15:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.855 11:15:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.855 11:15:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.855 11:15:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.855 11:15:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.855 11:15:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.855 11:15:26 -- paths/export.sh@5 -- # export PATH 00:11:44.855 11:15:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.855 11:15:26 -- nvmf/common.sh@46 -- # : 0 00:11:44.855 11:15:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:44.855 11:15:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:44.855 11:15:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:44.855 11:15:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.855 11:15:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.855 11:15:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:44.855 11:15:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:44.855 11:15:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:44.855 11:15:26 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:44.855 11:15:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:44.855 11:15:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.855 11:15:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:44.855 11:15:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:44.855 11:15:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:44.855 11:15:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.855 11:15:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.855 11:15:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.855 11:15:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:44.855 11:15:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:44.855 11:15:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.855 11:15:26 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.855 11:15:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:44.855 11:15:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:44.855 11:15:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:44.855 11:15:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:44.855 11:15:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:44.855 11:15:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.855 11:15:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:44.855 11:15:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:44.855 11:15:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:44.855 11:15:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:44.855 11:15:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:44.855 11:15:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:44.855 Cannot find device "nvmf_tgt_br" 00:11:44.855 11:15:26 -- nvmf/common.sh@154 -- # true 00:11:44.855 11:15:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:44.855 Cannot find device "nvmf_tgt_br2" 00:11:44.855 11:15:26 -- nvmf/common.sh@155 -- # true 00:11:44.855 11:15:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:44.855 11:15:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:44.855 Cannot find device "nvmf_tgt_br" 00:11:44.855 11:15:26 -- nvmf/common.sh@157 -- # true 00:11:44.855 11:15:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:45.113 Cannot find device "nvmf_tgt_br2" 00:11:45.113 11:15:26 -- nvmf/common.sh@158 -- # true 00:11:45.113 11:15:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:45.113 11:15:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:45.113 11:15:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.113 11:15:26 -- nvmf/common.sh@161 -- # true 00:11:45.113 11:15:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.113 11:15:26 -- nvmf/common.sh@162 -- # true 00:11:45.113 11:15:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.113 11:15:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.113 11:15:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.113 11:15:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.113 11:15:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.113 11:15:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.113 11:15:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.113 11:15:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.113 11:15:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.113 11:15:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:45.113 11:15:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:45.113 11:15:26 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:45.113 11:15:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:45.113 11:15:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.113 11:15:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.113 11:15:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.113 11:15:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:45.113 11:15:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:45.113 11:15:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.113 11:15:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.113 11:15:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.113 11:15:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.373 11:15:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.373 11:15:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:45.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:45.373 00:11:45.373 --- 10.0.0.2 ping statistics --- 00:11:45.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.373 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:45.373 11:15:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:45.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:11:45.373 00:11:45.373 --- 10.0.0.3 ping statistics --- 00:11:45.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.373 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:45.373 11:15:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:45.373 00:11:45.373 --- 10.0.0.1 ping statistics --- 00:11:45.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.373 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:45.373 11:15:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.373 11:15:26 -- nvmf/common.sh@421 -- # return 0 00:11:45.373 11:15:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:45.373 11:15:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.373 11:15:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:45.373 11:15:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:45.373 11:15:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.373 11:15:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:45.373 11:15:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:45.373 11:15:26 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:45.373 11:15:26 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=65933 00:11:45.373 11:15:26 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:45.373 11:15:26 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 65933 00:11:45.373 11:15:26 -- common/autotest_common.sh@819 -- # '[' -z 65933 ']' 00:11:45.373 11:15:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.373 11:15:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:45.373 11:15:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
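Once this fuzz-target instance is up, fabrics_fuzz.sh provisions a single malloc-backed subsystem and aims the fuzzer at it, as the trace below shows. Stripped of the xtrace noise, the sequence amounts to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

    # 30-second fixed-seed randomized run, then a pass driven by the example JSON command file
    $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a

Both runs end with "Shutting down the fuzz application", after which the subsystem is deleted and the target torn down.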
00:11:45.373 11:15:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:45.373 11:15:26 -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 11:15:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:46.312 11:15:27 -- common/autotest_common.sh@852 -- # return 0 00:11:46.312 11:15:27 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.312 11:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.312 11:15:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 11:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.312 11:15:27 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:46.312 11:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.312 11:15:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 Malloc0 00:11:46.312 11:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.312 11:15:27 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:46.312 11:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.312 11:15:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 11:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.312 11:15:27 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.312 11:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.312 11:15:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 11:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.312 11:15:27 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.312 11:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.312 11:15:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.625 11:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.625 11:15:27 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:46.625 11:15:27 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:46.883 Shutting down the fuzz application 00:11:46.883 11:15:28 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:47.142 Shutting down the fuzz application 00:11:47.142 11:15:28 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.142 11:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.142 11:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.142 11:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.142 11:15:28 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:47.142 11:15:28 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:47.142 11:15:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:47.142 11:15:28 -- nvmf/common.sh@116 -- # sync 00:11:47.142 11:15:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:47.142 11:15:28 -- nvmf/common.sh@119 -- # set +e 00:11:47.142 11:15:28 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:11:47.142 11:15:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:47.142 rmmod nvme_tcp 00:11:47.142 rmmod nvme_fabrics 00:11:47.142 rmmod nvme_keyring 00:11:47.142 11:15:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:47.142 11:15:28 -- nvmf/common.sh@123 -- # set -e 00:11:47.142 11:15:28 -- nvmf/common.sh@124 -- # return 0 00:11:47.142 11:15:28 -- nvmf/common.sh@477 -- # '[' -n 65933 ']' 00:11:47.142 11:15:28 -- nvmf/common.sh@478 -- # killprocess 65933 00:11:47.142 11:15:28 -- common/autotest_common.sh@926 -- # '[' -z 65933 ']' 00:11:47.142 11:15:28 -- common/autotest_common.sh@930 -- # kill -0 65933 00:11:47.142 11:15:28 -- common/autotest_common.sh@931 -- # uname 00:11:47.142 11:15:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.142 11:15:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65933 00:11:47.142 11:15:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:47.142 11:15:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:47.142 killing process with pid 65933 00:11:47.142 11:15:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65933' 00:11:47.142 11:15:28 -- common/autotest_common.sh@945 -- # kill 65933 00:11:47.142 11:15:28 -- common/autotest_common.sh@950 -- # wait 65933 00:11:47.400 11:15:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:47.400 11:15:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:47.400 11:15:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:47.400 11:15:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.400 11:15:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:47.400 11:15:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.400 11:15:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.400 11:15:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.400 11:15:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:47.400 11:15:28 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:47.400 00:11:47.400 real 0m2.665s 00:11:47.400 user 0m2.932s 00:11:47.400 sys 0m0.550s 00:11:47.400 11:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.400 11:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.400 ************************************ 00:11:47.400 END TEST nvmf_fuzz 00:11:47.400 ************************************ 00:11:47.660 11:15:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:47.660 11:15:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:47.660 11:15:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.660 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.660 ************************************ 00:11:47.660 START TEST nvmf_multiconnection 00:11:47.660 ************************************ 00:11:47.660 11:15:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:47.660 * Looking for test storage... 
00:11:47.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.660 11:15:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.660 11:15:29 -- nvmf/common.sh@7 -- # uname -s 00:11:47.660 11:15:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.660 11:15:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.660 11:15:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.660 11:15:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.660 11:15:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.660 11:15:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.660 11:15:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.660 11:15:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.660 11:15:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.660 11:15:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:47.660 11:15:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:11:47.660 11:15:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.660 11:15:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.660 11:15:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.660 11:15:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.660 11:15:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.660 11:15:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.660 11:15:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.660 11:15:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.660 11:15:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.660 11:15:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.660 11:15:29 -- 
paths/export.sh@5 -- # export PATH 00:11:47.660 11:15:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.660 11:15:29 -- nvmf/common.sh@46 -- # : 0 00:11:47.660 11:15:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:47.660 11:15:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:47.660 11:15:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:47.660 11:15:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.660 11:15:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.660 11:15:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:47.660 11:15:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:47.660 11:15:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:47.660 11:15:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.660 11:15:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.660 11:15:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:47.660 11:15:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:47.660 11:15:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:47.660 11:15:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.660 11:15:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:47.660 11:15:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:47.660 11:15:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:47.660 11:15:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.660 11:15:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.660 11:15:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.660 11:15:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:47.660 11:15:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:47.660 11:15:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.660 11:15:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.660 11:15:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:47.660 11:15:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:47.660 11:15:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.660 11:15:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.660 11:15:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.660 11:15:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.660 11:15:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.660 11:15:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.660 11:15:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.660 11:15:29 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.660 11:15:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:47.660 11:15:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:47.660 Cannot find device "nvmf_tgt_br" 00:11:47.660 11:15:29 -- nvmf/common.sh@154 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.660 Cannot find device "nvmf_tgt_br2" 00:11:47.660 11:15:29 -- nvmf/common.sh@155 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:47.660 11:15:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:47.660 Cannot find device "nvmf_tgt_br" 00:11:47.660 11:15:29 -- nvmf/common.sh@157 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:47.660 Cannot find device "nvmf_tgt_br2" 00:11:47.660 11:15:29 -- nvmf/common.sh@158 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:47.660 11:15:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:47.660 11:15:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.660 11:15:29 -- nvmf/common.sh@161 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.660 11:15:29 -- nvmf/common.sh@162 -- # true 00:11:47.660 11:15:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.919 11:15:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.919 11:15:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.919 11:15:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.919 11:15:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.919 11:15:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.919 11:15:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.919 11:15:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.920 11:15:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.920 11:15:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:47.920 11:15:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:47.920 11:15:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:47.920 11:15:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:47.920 11:15:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.920 11:15:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.920 11:15:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.920 11:15:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:47.920 11:15:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:47.920 11:15:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.920 11:15:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.920 11:15:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.920 
11:15:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.920 11:15:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.920 11:15:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:47.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:47.920 00:11:47.920 --- 10.0.0.2 ping statistics --- 00:11:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.920 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:47.920 11:15:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:47.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:47.920 00:11:47.920 --- 10.0.0.3 ping statistics --- 00:11:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.920 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:47.920 11:15:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:47.920 00:11:47.920 --- 10.0.0.1 ping statistics --- 00:11:47.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.920 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:47.920 11:15:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.920 11:15:29 -- nvmf/common.sh@421 -- # return 0 00:11:47.920 11:15:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:47.920 11:15:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.920 11:15:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:47.920 11:15:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:47.920 11:15:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.920 11:15:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:47.920 11:15:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.920 11:15:29 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:47.920 11:15:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.920 11:15:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:47.920 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.920 11:15:29 -- nvmf/common.sh@469 -- # nvmfpid=66120 00:11:47.920 11:15:29 -- nvmf/common.sh@470 -- # waitforlisten 66120 00:11:47.920 11:15:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.920 11:15:29 -- common/autotest_common.sh@819 -- # '[' -z 66120 ']' 00:11:47.920 11:15:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.920 11:15:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.920 11:15:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.920 11:15:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.920 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.920 [2024-10-13 11:15:29.503437] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
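The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever topology a previous test left behind, then rebuilds it from scratch. Reduced to the commands that matter (all taken from the trace, with the second target interface nvmf_tgt_if2/10.0.0.3 handled the same way), the layout is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check, as in the pings above

So the initiator talks from nvmf_init_if (10.0.0.1) across the nvmf_br bridge to nvmf_tgt_if (10.0.0.2) inside the nvmf_tgt_ns_spdk namespace, where the nvmf target listens on port 4420.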
00:11:47.920 [2024-10-13 11:15:29.503541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.179 [2024-10-13 11:15:29.644446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.179 [2024-10-13 11:15:29.715255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:48.179 [2024-10-13 11:15:29.715698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.179 [2024-10-13 11:15:29.715838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.179 [2024-10-13 11:15:29.715978] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.179 [2024-10-13 11:15:29.716348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.179 [2024-10-13 11:15:29.716443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.179 [2024-10-13 11:15:29.716976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.179 [2024-10-13 11:15:29.717014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.115 11:15:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:49.115 11:15:30 -- common/autotest_common.sh@852 -- # return 0 00:11:49.115 11:15:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:49.115 11:15:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.115 11:15:30 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 [2024-10-13 11:15:30.563799] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:49.115 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.115 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 Malloc1 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 [2024-10-13 11:15:30.630138] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.115 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 Malloc2 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.115 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 Malloc3 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.115 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:49.115 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.115 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.374 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.374 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:49.374 
11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.374 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 Malloc4 00:11:49.374 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.374 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:49.374 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.375 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 Malloc5 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.375 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 Malloc6 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.375 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 Malloc7 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.375 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 Malloc8 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 
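The long run of near-identical RPC groups in this part of the trace is produced by a simple loop in multiconnection.sh: for each of the NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace and a TCP listener, and the host then connects to each one in turn. Condensed (the waitforserial check is simplified here; the real helper retries at most 15 times with a 2-second sleep):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: one malloc-backed subsystem per iteration
    for i in $(seq 1 11); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Host side: connect to every subsystem and wait for its namespace to show up
    for i in $(seq 1 11); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 \
            --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
            sleep 2
        done
    done

This connect-and-wait pattern is exactly what the SPDK1/SPDK2 waitforserial exchanges in the trace that follows are doing.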
00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.375 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 Malloc9 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.375 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.375 11:15:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:49.375 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.375 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.634 11:15:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:49.634 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 Malloc10 00:11:49.634 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:49.634 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:49.634 11:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:49.634 11:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.634 11:15:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:49.634 11:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 Malloc11 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:49.634 11:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:49.634 11:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:49.634 11:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.634 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.634 11:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.634 11:15:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:49.634 11:15:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.634 11:15:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.634 11:15:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:49.634 11:15:31 -- common/autotest_common.sh@1177 -- # local i=0 00:11:49.634 11:15:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.634 11:15:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:49.634 11:15:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:52.167 11:15:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:52.167 11:15:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:52.167 11:15:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:11:52.167 11:15:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:52.167 11:15:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.167 11:15:33 -- common/autotest_common.sh@1187 -- # return 0 00:11:52.167 11:15:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:52.167 11:15:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:52.167 11:15:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:52.167 11:15:33 -- common/autotest_common.sh@1177 -- # local i=0 00:11:52.167 11:15:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.167 11:15:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:52.167 11:15:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:54.067 11:15:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:54.067 11:15:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:54.067 11:15:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:11:54.067 11:15:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:54.067 11:15:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.067 11:15:35 -- common/autotest_common.sh@1187 -- # return 0 00:11:54.067 11:15:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:11:54.067 11:15:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:54.067 11:15:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:54.067 11:15:35 -- common/autotest_common.sh@1177 -- # local i=0 00:11:54.067 11:15:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.067 11:15:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:54.067 11:15:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:55.966 11:15:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:55.966 11:15:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:55.966 11:15:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:11:55.966 11:15:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:55.966 11:15:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.966 11:15:37 -- common/autotest_common.sh@1187 -- # return 0 00:11:55.966 11:15:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:55.966 11:15:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:11:56.224 11:15:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:11:56.224 11:15:37 -- common/autotest_common.sh@1177 -- # local i=0 00:11:56.224 11:15:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.224 11:15:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:56.224 11:15:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:58.121 11:15:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:58.121 11:15:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:58.121 11:15:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:11:58.121 11:15:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:58.121 11:15:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.121 11:15:39 -- common/autotest_common.sh@1187 -- # return 0 00:11:58.121 11:15:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:58.121 11:15:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:11:58.380 11:15:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:11:58.380 11:15:39 -- common/autotest_common.sh@1177 -- # local i=0 00:11:58.380 11:15:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.380 11:15:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:58.380 11:15:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:00.280 11:15:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:00.280 11:15:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:00.280 11:15:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:12:00.280 11:15:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:00.281 11:15:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.281 11:15:41 
-- common/autotest_common.sh@1187 -- # return 0 00:12:00.281 11:15:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.281 11:15:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:00.538 11:15:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:00.538 11:15:42 -- common/autotest_common.sh@1177 -- # local i=0 00:12:00.538 11:15:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.538 11:15:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:00.538 11:15:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:02.437 11:15:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:02.437 11:15:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:02.437 11:15:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:12:02.437 11:15:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:02.437 11:15:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.437 11:15:44 -- common/autotest_common.sh@1187 -- # return 0 00:12:02.437 11:15:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:02.437 11:15:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:02.695 11:15:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:02.696 11:15:44 -- common/autotest_common.sh@1177 -- # local i=0 00:12:02.696 11:15:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.696 11:15:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:02.696 11:15:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:04.599 11:15:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:04.599 11:15:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:04.599 11:15:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:12:04.599 11:15:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:04.599 11:15:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.599 11:15:46 -- common/autotest_common.sh@1187 -- # return 0 00:12:04.599 11:15:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:04.599 11:15:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:04.858 11:15:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:04.858 11:15:46 -- common/autotest_common.sh@1177 -- # local i=0 00:12:04.858 11:15:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.858 11:15:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:04.858 11:15:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:06.779 11:15:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:06.779 11:15:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:06.779 11:15:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:12:06.779 11:15:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
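On the host side, every iteration of the connect loop traced here is the same two-step pattern: connect to the next subsystem over TCP, then poll until a namespace with the expected serial appears. Condensed into plain shell (serial SPDK9 as the example; the hostnqn/hostid UUID is the one generated for this run, and the polling loop is only an approximation of waitforserial in common/autotest_common.sh):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode9 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 \
      --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47
  # allow up to ~30 s for the new namespace to show up with serial SPDK9
  for i in $(seq 1 15); do
      [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK9)" -ge 1 ] && break
      sleep 2
  done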
00:12:06.779 11:15:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.779 11:15:48 -- common/autotest_common.sh@1187 -- # return 0 00:12:06.779 11:15:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:06.779 11:15:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:07.037 11:15:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:07.037 11:15:48 -- common/autotest_common.sh@1177 -- # local i=0 00:12:07.037 11:15:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.037 11:15:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:07.037 11:15:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:08.941 11:15:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:08.941 11:15:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:08.941 11:15:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:12:08.941 11:15:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:08.941 11:15:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.941 11:15:50 -- common/autotest_common.sh@1187 -- # return 0 00:12:08.941 11:15:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:08.941 11:15:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:09.200 11:15:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:09.200 11:15:50 -- common/autotest_common.sh@1177 -- # local i=0 00:12:09.200 11:15:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.200 11:15:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:09.200 11:15:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:11.105 11:15:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:11.105 11:15:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:11.105 11:15:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:12:11.105 11:15:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:11.105 11:15:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.105 11:15:52 -- common/autotest_common.sh@1187 -- # return 0 00:12:11.105 11:15:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.105 11:15:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:11.364 11:15:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:11.364 11:15:52 -- common/autotest_common.sh@1177 -- # local i=0 00:12:11.364 11:15:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.364 11:15:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:11.364 11:15:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:13.268 11:15:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:13.268 11:15:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:13.268 11:15:54 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:12:13.526 11:15:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:13.526 11:15:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.526 11:15:54 -- common/autotest_common.sh@1187 -- # return 0 00:12:13.526 11:15:54 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:13.526 [global] 00:12:13.526 thread=1 00:12:13.526 invalidate=1 00:12:13.526 rw=read 00:12:13.526 time_based=1 00:12:13.526 runtime=10 00:12:13.526 ioengine=libaio 00:12:13.526 direct=1 00:12:13.526 bs=262144 00:12:13.526 iodepth=64 00:12:13.526 norandommap=1 00:12:13.526 numjobs=1 00:12:13.526 00:12:13.526 [job0] 00:12:13.526 filename=/dev/nvme0n1 00:12:13.527 [job1] 00:12:13.527 filename=/dev/nvme10n1 00:12:13.527 [job2] 00:12:13.527 filename=/dev/nvme1n1 00:12:13.527 [job3] 00:12:13.527 filename=/dev/nvme2n1 00:12:13.527 [job4] 00:12:13.527 filename=/dev/nvme3n1 00:12:13.527 [job5] 00:12:13.527 filename=/dev/nvme4n1 00:12:13.527 [job6] 00:12:13.527 filename=/dev/nvme5n1 00:12:13.527 [job7] 00:12:13.527 filename=/dev/nvme6n1 00:12:13.527 [job8] 00:12:13.527 filename=/dev/nvme7n1 00:12:13.527 [job9] 00:12:13.527 filename=/dev/nvme8n1 00:12:13.527 [job10] 00:12:13.527 filename=/dev/nvme9n1 00:12:13.527 Could not set queue depth (nvme0n1) 00:12:13.527 Could not set queue depth (nvme10n1) 00:12:13.527 Could not set queue depth (nvme1n1) 00:12:13.527 Could not set queue depth (nvme2n1) 00:12:13.527 Could not set queue depth (nvme3n1) 00:12:13.527 Could not set queue depth (nvme4n1) 00:12:13.527 Could not set queue depth (nvme5n1) 00:12:13.527 Could not set queue depth (nvme6n1) 00:12:13.527 Could not set queue depth (nvme7n1) 00:12:13.527 Could not set queue depth (nvme8n1) 00:12:13.527 Could not set queue depth (nvme9n1) 00:12:13.785 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:13.786 fio-3.35 00:12:13.786 Starting 11 threads 00:12:26.034 00:12:26.034 job0: (groupid=0, jobs=1): err= 0: pid=66580: Sun Oct 13 11:16:05 2024 00:12:26.034 read: IOPS=694, BW=174MiB/s (182MB/s)(1744MiB/10050msec) 00:12:26.034 slat (usec): min=18, max=28126, avg=1409.07, stdev=3238.21 
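As a point of reference for the per-job statistics being printed here, the read phase launched above via scripts/fio-wrapper (-p nvmf -i 262144 -d 64 -t read -r 10) runs one such job per connected namespace; against a single device, roughly the same workload could be expressed directly with fio (device path illustrative):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
      --thread=1 --invalidate=1 --norandommap=1 --numjobs=1 \
      --time_based=1 --runtime=10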
00:12:26.034 clat (msec): min=9, max=141, avg=90.65, stdev=12.34 00:12:26.034 lat (msec): min=12, max=141, avg=92.06, stdev=12.49 00:12:26.034 clat percentiles (msec): 00:12:26.034 | 1.00th=[ 36], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 85], 00:12:26.034 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:12:26.034 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 106], 00:12:26.034 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 138], 00:12:26.034 | 99.99th=[ 142] 00:12:26.034 bw ( KiB/s): min=166067, max=205312, per=8.52%, avg=176956.15, stdev=7648.33, samples=20 00:12:26.034 iops : min= 648, max= 802, avg=691.20, stdev=29.93, samples=20 00:12:26.034 lat (msec) : 10=0.01%, 20=0.33%, 50=1.96%, 100=83.21%, 250=14.48% 00:12:26.034 cpu : usr=0.38%, sys=2.62%, ctx=1513, majf=0, minf=4097 00:12:26.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.034 issued rwts: total=6976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.034 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.034 job1: (groupid=0, jobs=1): err= 0: pid=66581: Sun Oct 13 11:16:05 2024 00:12:26.034 read: IOPS=695, BW=174MiB/s (182MB/s)(1747MiB/10049msec) 00:12:26.035 slat (usec): min=16, max=21837, avg=1426.27, stdev=3164.28 00:12:26.035 clat (msec): min=15, max=134, avg=90.44, stdev=10.85 00:12:26.035 lat (msec): min=17, max=134, avg=91.87, stdev=10.92 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 49], 5.00th=[ 73], 10.00th=[ 79], 20.00th=[ 85], 00:12:26.035 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 94], 00:12:26.035 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 106], 00:12:26.035 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 125], 99.95th=[ 133], 00:12:26.035 | 99.99th=[ 136] 00:12:26.035 bw ( KiB/s): min=169472, max=214444, per=8.54%, avg=177309.95, stdev=9554.80, samples=20 00:12:26.035 iops : min= 662, max= 837, avg=692.55, stdev=37.21, samples=20 00:12:26.035 lat (msec) : 20=0.07%, 50=1.06%, 100=83.53%, 250=15.34% 00:12:26.035 cpu : usr=0.30%, sys=2.77%, ctx=1515, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=6989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job2: (groupid=0, jobs=1): err= 0: pid=66582: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=694, BW=174MiB/s (182MB/s)(1744MiB/10048msec) 00:12:26.035 slat (usec): min=16, max=25305, avg=1429.29, stdev=3196.46 00:12:26.035 clat (msec): min=20, max=132, avg=90.55, stdev=10.17 00:12:26.035 lat (msec): min=23, max=147, avg=91.98, stdev=10.23 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 54], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 85], 00:12:26.035 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:12:26.035 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 105], 00:12:26.035 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 127], 99.95th=[ 133], 00:12:26.035 | 99.99th=[ 133] 00:12:26.035 bw ( KiB/s): min=166578, max=209920, per=8.52%, avg=176981.70, stdev=8662.14, samples=20 00:12:26.035 iops : min= 650, max= 820, 
avg=691.30, stdev=33.88, samples=20 00:12:26.035 lat (msec) : 50=0.69%, 100=86.00%, 250=13.32% 00:12:26.035 cpu : usr=0.37%, sys=2.30%, ctx=1554, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=6977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job3: (groupid=0, jobs=1): err= 0: pid=66583: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=675, BW=169MiB/s (177MB/s)(1700MiB/10067msec) 00:12:26.035 slat (usec): min=20, max=44674, avg=1466.96, stdev=3248.23 00:12:26.035 clat (msec): min=22, max=153, avg=93.11, stdev= 9.98 00:12:26.035 lat (msec): min=23, max=159, avg=94.57, stdev=10.03 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 73], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 87], 00:12:26.035 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:26.035 | 70.00th=[ 97], 80.00th=[ 100], 90.00th=[ 104], 95.00th=[ 109], 00:12:26.035 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:12:26.035 | 99.99th=[ 153] 00:12:26.035 bw ( KiB/s): min=132608, max=186368, per=8.31%, avg=172483.10, stdev=10379.97, samples=20 00:12:26.035 iops : min= 518, max= 728, avg=673.50, stdev=40.48, samples=20 00:12:26.035 lat (msec) : 50=0.35%, 100=81.37%, 250=18.28% 00:12:26.035 cpu : usr=0.27%, sys=2.69%, ctx=1508, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job4: (groupid=0, jobs=1): err= 0: pid=66584: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=675, BW=169MiB/s (177MB/s)(1698MiB/10062msec) 00:12:26.035 slat (usec): min=19, max=44231, avg=1467.25, stdev=3290.04 00:12:26.035 clat (msec): min=53, max=155, avg=93.22, stdev=10.10 00:12:26.035 lat (msec): min=53, max=167, avg=94.68, stdev=10.16 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 71], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 86], 00:12:26.035 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:26.035 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 110], 00:12:26.035 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:12:26.035 | 99.99th=[ 157] 00:12:26.035 bw ( KiB/s): min=143073, max=182272, per=8.30%, avg=172376.15, stdev=7724.32, samples=20 00:12:26.035 iops : min= 558, max= 712, avg=673.10, stdev=30.31, samples=20 00:12:26.035 lat (msec) : 100=79.42%, 250=20.58% 00:12:26.035 cpu : usr=0.36%, sys=2.76%, ctx=1497, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=6793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job5: (groupid=0, jobs=1): err= 0: pid=66585: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=674, 
BW=169MiB/s (177MB/s)(1699MiB/10073msec) 00:12:26.035 slat (usec): min=20, max=45114, avg=1468.88, stdev=3251.93 00:12:26.035 clat (msec): min=16, max=159, avg=93.22, stdev= 9.96 00:12:26.035 lat (msec): min=17, max=159, avg=94.69, stdev=10.03 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 73], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 87], 00:12:26.035 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:26.035 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 104], 95.00th=[ 108], 00:12:26.035 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 150], 00:12:26.035 | 99.99th=[ 161] 00:12:26.035 bw ( KiB/s): min=145186, max=184832, per=8.30%, avg=172335.90, stdev=7676.43, samples=20 00:12:26.035 iops : min= 567, max= 722, avg=673.15, stdev=29.99, samples=20 00:12:26.035 lat (msec) : 20=0.09%, 50=0.16%, 100=79.80%, 250=19.95% 00:12:26.035 cpu : usr=0.38%, sys=2.83%, ctx=1539, majf=0, minf=4098 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job6: (groupid=0, jobs=1): err= 0: pid=66586: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=527, BW=132MiB/s (138MB/s)(1331MiB/10089msec) 00:12:26.035 slat (usec): min=17, max=68446, avg=1872.98, stdev=4172.81 00:12:26.035 clat (msec): min=15, max=214, avg=119.17, stdev=11.61 00:12:26.035 lat (msec): min=19, max=217, avg=121.04, stdev=11.87 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 64], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 115], 00:12:26.035 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:12:26.035 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 132], 00:12:26.035 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 201], 99.95th=[ 201], 00:12:26.035 | 99.99th=[ 215] 00:12:26.035 bw ( KiB/s): min=130560, max=140288, per=6.49%, avg=134720.55, stdev=2353.90, samples=20 00:12:26.035 iops : min= 510, max= 548, avg=526.10, stdev= 9.22, samples=20 00:12:26.035 lat (msec) : 20=0.06%, 50=0.49%, 100=1.03%, 250=98.42% 00:12:26.035 cpu : usr=0.23%, sys=2.25%, ctx=1270, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=5325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job7: (groupid=0, jobs=1): err= 0: pid=66587: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=1929, BW=482MiB/s (506MB/s)(4829MiB/10009msec) 00:12:26.035 slat (usec): min=15, max=14678, avg=512.79, stdev=1080.81 00:12:26.035 clat (usec): min=2034, max=81260, avg=32588.34, stdev=3083.31 00:12:26.035 lat (usec): min=2073, max=81762, avg=33101.13, stdev=3087.73 00:12:26.035 clat percentiles (usec): 00:12:26.035 | 1.00th=[27657], 5.00th=[29492], 10.00th=[30278], 20.00th=[31065], 00:12:26.035 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:12:26.035 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:12:26.035 | 99.00th=[38011], 99.50th=[41157], 99.90th=[76022], 99.95th=[80217], 00:12:26.035 | 
99.99th=[81265] 00:12:26.035 bw ( KiB/s): min=415038, max=509434, per=23.74%, avg=492996.80, stdev=19061.11, samples=20 00:12:26.035 iops : min= 1621, max= 1989, avg=1925.45, stdev=74.39, samples=20 00:12:26.035 lat (msec) : 4=0.01%, 10=0.04%, 20=0.14%, 50=99.38%, 100=0.43% 00:12:26.035 cpu : usr=0.67%, sys=5.52%, ctx=3935, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=19317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.035 job8: (groupid=0, jobs=1): err= 0: pid=66588: Sun Oct 13 11:16:05 2024 00:12:26.035 read: IOPS=525, BW=131MiB/s (138MB/s)(1325MiB/10082msec) 00:12:26.035 slat (usec): min=20, max=49296, avg=1884.23, stdev=4147.48 00:12:26.035 clat (msec): min=31, max=203, avg=119.73, stdev= 8.86 00:12:26.035 lat (msec): min=31, max=203, avg=121.62, stdev= 9.09 00:12:26.035 clat percentiles (msec): 00:12:26.035 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 00:12:26.035 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:12:26.035 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 131], 00:12:26.035 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 201], 99.95th=[ 201], 00:12:26.035 | 99.99th=[ 203] 00:12:26.035 bw ( KiB/s): min=121344, max=139264, per=6.46%, avg=134067.30, stdev=4490.04, samples=20 00:12:26.035 iops : min= 474, max= 544, avg=523.65, stdev=17.54, samples=20 00:12:26.035 lat (msec) : 50=0.19%, 100=0.64%, 250=99.17% 00:12:26.035 cpu : usr=0.22%, sys=2.01%, ctx=1222, majf=0, minf=4097 00:12:26.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:26.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.035 issued rwts: total=5300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.036 job9: (groupid=0, jobs=1): err= 0: pid=66589: Sun Oct 13 11:16:05 2024 00:12:26.036 read: IOPS=526, BW=132MiB/s (138MB/s)(1328MiB/10086msec) 00:12:26.036 slat (usec): min=19, max=77155, avg=1877.30, stdev=4127.96 00:12:26.036 clat (msec): min=33, max=211, avg=119.44, stdev= 8.95 00:12:26.036 lat (msec): min=34, max=211, avg=121.32, stdev= 9.24 00:12:26.036 clat percentiles (msec): 00:12:26.036 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 00:12:26.036 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:12:26.036 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 131], 00:12:26.036 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 201], 00:12:26.036 | 99.99th=[ 211] 00:12:26.036 bw ( KiB/s): min=119022, max=139776, per=6.47%, avg=134371.10, stdev=4360.85, samples=20 00:12:26.036 iops : min= 464, max= 546, avg=524.70, stdev=17.16, samples=20 00:12:26.036 lat (msec) : 50=0.38%, 100=0.24%, 250=99.38% 00:12:26.036 cpu : usr=0.34%, sys=2.07%, ctx=1243, majf=0, minf=4097 00:12:26.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:26.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.036 issued rwts: total=5313,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:26.036 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.036 job10: (groupid=0, jobs=1): err= 0: pid=66590: Sun Oct 13 11:16:05 2024 00:12:26.036 read: IOPS=524, BW=131MiB/s (137MB/s)(1323MiB/10094msec) 00:12:26.036 slat (usec): min=20, max=50464, avg=1889.64, stdev=4231.56 00:12:26.036 clat (msec): min=19, max=197, avg=120.03, stdev= 9.13 00:12:26.036 lat (msec): min=20, max=197, avg=121.92, stdev= 9.42 00:12:26.036 clat percentiles (msec): 00:12:26.036 | 1.00th=[ 96], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 115], 00:12:26.036 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 122], 00:12:26.036 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 129], 95.00th=[ 132], 00:12:26.036 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 199], 00:12:26.036 | 99.99th=[ 199] 00:12:26.036 bw ( KiB/s): min=126204, max=137216, per=6.44%, avg=133825.40, stdev=2628.57, samples=20 00:12:26.036 iops : min= 492, max= 536, avg=522.60, stdev=10.49, samples=20 00:12:26.036 lat (msec) : 20=0.02%, 50=0.09%, 100=1.53%, 250=98.36% 00:12:26.036 cpu : usr=0.22%, sys=2.29%, ctx=1235, majf=0, minf=4097 00:12:26.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:26.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:26.036 issued rwts: total=5290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.036 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:26.036 00:12:26.036 Run status group 0 (all jobs): 00:12:26.036 READ: bw=2028MiB/s (2126MB/s), 131MiB/s-482MiB/s (137MB/s-506MB/s), io=20.0GiB (21.5GB), run=10009-10094msec 00:12:26.036 00:12:26.036 Disk stats (read/write): 00:12:26.036 nvme0n1: ios=13847/0, merge=0/0, ticks=1235267/0, in_queue=1235267, util=97.90% 00:12:26.036 nvme10n1: ios=13874/0, merge=0/0, ticks=1236695/0, in_queue=1236695, util=97.88% 00:12:26.036 nvme1n1: ios=13850/0, merge=0/0, ticks=1235233/0, in_queue=1235233, util=98.17% 00:12:26.036 nvme2n1: ios=13475/0, merge=0/0, ticks=1232622/0, in_queue=1232622, util=98.17% 00:12:26.036 nvme3n1: ios=13473/0, merge=0/0, ticks=1233551/0, in_queue=1233551, util=98.18% 00:12:26.036 nvme4n1: ios=13471/0, merge=0/0, ticks=1234976/0, in_queue=1234976, util=98.53% 00:12:26.036 nvme5n1: ios=10534/0, merge=0/0, ticks=1227784/0, in_queue=1227784, util=98.55% 00:12:26.036 nvme6n1: ios=38536/0, merge=0/0, ticks=1242077/0, in_queue=1242077, util=98.59% 00:12:26.036 nvme7n1: ios=10475/0, merge=0/0, ticks=1226814/0, in_queue=1226814, util=98.84% 00:12:26.036 nvme8n1: ios=10502/0, merge=0/0, ticks=1229335/0, in_queue=1229335, util=98.93% 00:12:26.036 nvme9n1: ios=10453/0, merge=0/0, ticks=1229283/0, in_queue=1229283, util=99.16% 00:12:26.036 11:16:05 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:26.036 [global] 00:12:26.036 thread=1 00:12:26.036 invalidate=1 00:12:26.036 rw=randwrite 00:12:26.036 time_based=1 00:12:26.036 runtime=10 00:12:26.036 ioengine=libaio 00:12:26.036 direct=1 00:12:26.036 bs=262144 00:12:26.036 iodepth=64 00:12:26.036 norandommap=1 00:12:26.036 numjobs=1 00:12:26.036 00:12:26.036 [job0] 00:12:26.036 filename=/dev/nvme0n1 00:12:26.036 [job1] 00:12:26.036 filename=/dev/nvme10n1 00:12:26.036 [job2] 00:12:26.036 filename=/dev/nvme1n1 00:12:26.036 [job3] 00:12:26.036 filename=/dev/nvme2n1 00:12:26.036 [job4] 00:12:26.036 filename=/dev/nvme3n1 
00:12:26.036 [job5] 00:12:26.036 filename=/dev/nvme4n1 00:12:26.036 [job6] 00:12:26.036 filename=/dev/nvme5n1 00:12:26.036 [job7] 00:12:26.036 filename=/dev/nvme6n1 00:12:26.036 [job8] 00:12:26.036 filename=/dev/nvme7n1 00:12:26.036 [job9] 00:12:26.036 filename=/dev/nvme8n1 00:12:26.036 [job10] 00:12:26.036 filename=/dev/nvme9n1 00:12:26.036 Could not set queue depth (nvme0n1) 00:12:26.036 Could not set queue depth (nvme10n1) 00:12:26.036 Could not set queue depth (nvme1n1) 00:12:26.036 Could not set queue depth (nvme2n1) 00:12:26.036 Could not set queue depth (nvme3n1) 00:12:26.036 Could not set queue depth (nvme4n1) 00:12:26.036 Could not set queue depth (nvme5n1) 00:12:26.036 Could not set queue depth (nvme6n1) 00:12:26.036 Could not set queue depth (nvme7n1) 00:12:26.036 Could not set queue depth (nvme8n1) 00:12:26.036 Could not set queue depth (nvme9n1) 00:12:26.036 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:26.036 fio-3.35 00:12:26.036 Starting 11 threads 00:12:36.016 00:12:36.016 job0: (groupid=0, jobs=1): err= 0: pid=66797: Sun Oct 13 11:16:16 2024 00:12:36.016 write: IOPS=339, BW=84.9MiB/s (89.1MB/s)(863MiB/10164msec); 0 zone resets 00:12:36.016 slat (usec): min=16, max=58845, avg=2890.19, stdev=5080.08 00:12:36.016 clat (msec): min=60, max=351, avg=185.42, stdev=20.48 00:12:36.016 lat (msec): min=60, max=351, avg=188.31, stdev=20.16 00:12:36.016 clat percentiles (msec): 00:12:36.016 | 1.00th=[ 107], 5.00th=[ 153], 10.00th=[ 176], 20.00th=[ 180], 00:12:36.016 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:12:36.016 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 197], 00:12:36.016 | 99.00th=[ 245], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 351], 00:12:36.016 | 99.99th=[ 351] 00:12:36.016 bw ( KiB/s): min=83968, max=103118, per=6.00%, avg=86794.30, stdev=4069.81, samples=20 00:12:36.016 iops : min= 328, max= 402, avg=339.00, stdev=15.73, samples=20 00:12:36.016 lat (msec) : 100=0.93%, 250=98.09%, 500=0.98% 00:12:36.016 cpu : usr=0.55%, sys=1.14%, ctx=4258, majf=0, minf=1 00:12:36.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:36.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.016 issued rwts: total=0,3453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.016 job1: (groupid=0, jobs=1): err= 0: pid=66798: Sun Oct 13 11:16:16 2024 00:12:36.016 write: IOPS=339, BW=85.0MiB/s (89.1MB/s)(864MiB/10168msec); 0 zone resets 00:12:36.016 slat (usec): min=19, max=70560, avg=2889.49, stdev=5129.31 00:12:36.016 clat (msec): min=17, max=347, avg=185.27, stdev=24.55 00:12:36.016 lat (msec): min=17, max=347, avg=188.16, stdev=24.40 00:12:36.016 clat percentiles (msec): 00:12:36.016 | 1.00th=[ 45], 5.00th=[ 159], 10.00th=[ 178], 20.00th=[ 180], 00:12:36.016 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:12:36.016 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 197], 95.00th=[ 197], 00:12:36.016 | 99.00th=[ 243], 99.50th=[ 300], 99.90th=[ 338], 99.95th=[ 347], 00:12:36.016 | 99.99th=[ 347] 00:12:36.016 bw ( KiB/s): min=83968, max=106496, per=6.01%, avg=86886.40, stdev=4883.04, samples=20 00:12:36.016 iops : min= 328, max= 416, avg=339.40, stdev=19.07, samples=20 00:12:36.016 lat (msec) : 20=0.12%, 50=1.04%, 100=0.58%, 250=97.28%, 500=0.98% 00:12:36.016 cpu : usr=0.49%, sys=1.09%, ctx=4230, majf=0, minf=1 00:12:36.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:36.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.016 issued rwts: total=0,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.016 job2: (groupid=0, jobs=1): err= 0: pid=66810: Sun Oct 13 11:16:16 2024 00:12:36.016 write: IOPS=520, BW=130MiB/s (136MB/s)(1316MiB/10118msec); 0 zone resets 00:12:36.016 slat (usec): min=17, max=13880, avg=1894.71, stdev=3241.74 00:12:36.016 clat (msec): min=5, max=239, avg=121.06, stdev=14.65 00:12:36.016 lat (msec): min=5, max=239, avg=122.95, stdev=14.51 00:12:36.016 clat percentiles (msec): 00:12:36.016 | 1.00th=[ 81], 5.00th=[ 90], 10.00th=[ 116], 20.00th=[ 118], 00:12:36.016 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 126], 00:12:36.016 | 70.00th=[ 127], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 129], 00:12:36.016 | 99.00th=[ 138], 99.50th=[ 186], 99.90th=[ 232], 99.95th=[ 232], 00:12:36.016 | 99.99th=[ 241] 00:12:36.016 bw ( KiB/s): min=129024, max=178176, per=9.21%, avg=133184.70, stdev=10845.75, samples=20 00:12:36.016 iops : min= 504, max= 696, avg=520.20, stdev=42.39, samples=20 00:12:36.016 lat (msec) : 10=0.08%, 20=0.08%, 50=0.42%, 100=7.83%, 250=91.60% 00:12:36.016 cpu : usr=1.03%, sys=1.40%, ctx=6168, majf=0, minf=1 00:12:36.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:36.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.016 issued rwts: total=0,5264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.016 job3: (groupid=0, jobs=1): err= 0: pid=66811: Sun Oct 13 11:16:16 2024 00:12:36.016 write: IOPS=337, BW=84.3MiB/s (88.4MB/s)(858MiB/10170msec); 0 zone resets 00:12:36.016 slat (usec): min=20, max=79381, avg=2911.59, stdev=5208.54 00:12:36.016 clat (msec): min=3, max=353, avg=186.75, stdev=20.18 00:12:36.016 lat (msec): 
min=3, max=353, avg=189.66, stdev=19.82 00:12:36.016 clat percentiles (msec): 00:12:36.016 | 1.00th=[ 125], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 180], 00:12:36.016 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:12:36.016 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 197], 95.00th=[ 203], 00:12:36.016 | 99.00th=[ 249], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 355], 00:12:36.016 | 99.99th=[ 355] 00:12:36.016 bw ( KiB/s): min=81920, max=95552, per=5.96%, avg=86202.60, stdev=2518.79, samples=20 00:12:36.016 iops : min= 320, max= 373, avg=336.65, stdev= 9.82, samples=20 00:12:36.016 lat (msec) : 4=0.09%, 20=0.12%, 100=0.35%, 250=98.45%, 500=0.99% 00:12:36.016 cpu : usr=0.63%, sys=0.96%, ctx=3407, majf=0, minf=1 00:12:36.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:36.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.016 issued rwts: total=0,3430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.016 job4: (groupid=0, jobs=1): err= 0: pid=66812: Sun Oct 13 11:16:16 2024 00:12:36.016 write: IOPS=520, BW=130MiB/s (136MB/s)(1315MiB/10115msec); 0 zone resets 00:12:36.016 slat (usec): min=18, max=11823, avg=1895.61, stdev=3242.72 00:12:36.016 clat (msec): min=14, max=238, avg=121.13, stdev=14.36 00:12:36.016 lat (msec): min=14, max=238, avg=123.03, stdev=14.21 00:12:36.016 clat percentiles (msec): 00:12:36.016 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 116], 20.00th=[ 118], 00:12:36.017 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 125], 60.00th=[ 126], 00:12:36.017 | 70.00th=[ 127], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 129], 00:12:36.017 | 99.00th=[ 136], 99.50th=[ 184], 99.90th=[ 232], 99.95th=[ 232], 00:12:36.017 | 99.99th=[ 239] 00:12:36.017 bw ( KiB/s): min=129024, max=176128, per=9.20%, avg=133069.00, stdev=10395.91, samples=20 00:12:36.017 iops : min= 504, max= 688, avg=519.80, stdev=40.61, samples=20 00:12:36.017 lat (msec) : 20=0.08%, 50=0.46%, 100=7.68%, 250=91.79% 00:12:36.017 cpu : usr=0.97%, sys=1.56%, ctx=6366, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,5260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job5: (groupid=0, jobs=1): err= 0: pid=66813: Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=672, BW=168MiB/s (176MB/s)(1695MiB/10082msec); 0 zone resets 00:12:36.017 slat (usec): min=17, max=34007, avg=1469.09, stdev=2517.30 00:12:36.017 clat (msec): min=17, max=174, avg=93.67, stdev= 9.65 00:12:36.017 lat (msec): min=17, max=174, avg=95.13, stdev= 9.47 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 90], 00:12:36.017 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 93], 00:12:36.017 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 96], 95.00th=[ 116], 00:12:36.017 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 169], 00:12:36.017 | 99.99th=[ 176] 00:12:36.017 bw ( KiB/s): min=126976, max=178688, per=11.89%, avg=171955.20, stdev=11724.07, samples=20 00:12:36.017 iops : min= 496, max= 698, avg=671.70, stdev=45.80, samples=20 00:12:36.017 
lat (msec) : 20=0.06%, 50=0.24%, 100=93.60%, 250=6.11% 00:12:36.017 cpu : usr=1.11%, sys=1.99%, ctx=8331, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,6780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job6: (groupid=0, jobs=1): err= 0: pid=66814: Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=349, BW=87.3MiB/s (91.5MB/s)(888MiB/10175msec); 0 zone resets 00:12:36.017 slat (usec): min=19, max=43249, avg=2779.88, stdev=4920.20 00:12:36.017 clat (msec): min=7, max=353, avg=180.34, stdev=28.23 00:12:36.017 lat (msec): min=7, max=353, avg=183.12, stdev=28.29 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 69], 5.00th=[ 126], 10.00th=[ 144], 20.00th=[ 178], 00:12:36.017 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 188], 60.00th=[ 190], 00:12:36.017 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 194], 00:12:36.017 | 99.00th=[ 249], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 355], 00:12:36.017 | 99.99th=[ 355] 00:12:36.017 bw ( KiB/s): min=84136, max=119808, per=6.18%, avg=89343.80, stdev=9897.01, samples=20 00:12:36.017 iops : min= 328, max= 468, avg=348.85, stdev=38.73, samples=20 00:12:36.017 lat (msec) : 10=0.06%, 20=0.11%, 50=0.45%, 100=1.58%, 250=96.85% 00:12:36.017 lat (msec) : 500=0.96% 00:12:36.017 cpu : usr=0.66%, sys=1.02%, ctx=4891, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,3553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job7: (groupid=0, jobs=1): err= 0: pid=66815: Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=1079, BW=270MiB/s (283MB/s)(2713MiB/10052msec); 0 zone resets 00:12:36.017 slat (usec): min=16, max=50231, avg=917.48, stdev=1624.44 00:12:36.017 clat (msec): min=48, max=143, avg=58.36, stdev= 9.10 00:12:36.017 lat (msec): min=49, max=143, avg=59.28, stdev= 9.11 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55], 00:12:36.017 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:12:36.017 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 59], 95.00th=[ 61], 00:12:36.017 | 99.00th=[ 105], 99.50th=[ 113], 99.90th=[ 132], 99.95th=[ 138], 00:12:36.017 | 99.99th=[ 144] 00:12:36.017 bw ( KiB/s): min=147456, max=287744, per=19.09%, avg=276118.65, stdev=31891.89, samples=20 00:12:36.017 iops : min= 576, max= 1124, avg=1078.55, stdev=124.57, samples=20 00:12:36.017 lat (msec) : 50=0.02%, 100=98.08%, 250=1.90% 00:12:36.017 cpu : usr=1.49%, sys=2.27%, ctx=11679, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,10850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job8: (groupid=0, jobs=1): err= 0: pid=66816: 
Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=673, BW=168MiB/s (176MB/s)(1697MiB/10085msec); 0 zone resets 00:12:36.017 slat (usec): min=18, max=15026, avg=1467.37, stdev=2498.82 00:12:36.017 clat (msec): min=15, max=175, avg=93.58, stdev= 9.25 00:12:36.017 lat (msec): min=15, max=176, avg=95.04, stdev= 9.06 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 90], 00:12:36.017 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 94], 00:12:36.017 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 96], 95.00th=[ 112], 00:12:36.017 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 165], 99.95th=[ 171], 00:12:36.017 | 99.99th=[ 176] 00:12:36.017 bw ( KiB/s): min=131072, max=178688, per=11.91%, avg=172220.55, stdev=10939.77, samples=20 00:12:36.017 iops : min= 512, max= 698, avg=672.65, stdev=42.71, samples=20 00:12:36.017 lat (msec) : 20=0.06%, 50=0.24%, 100=93.53%, 250=6.17% 00:12:36.017 cpu : usr=1.04%, sys=2.09%, ctx=8581, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,6789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job9: (groupid=0, jobs=1): err= 0: pid=66817: Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=521, BW=130MiB/s (137MB/s)(1317MiB/10113msec); 0 zone resets 00:12:36.017 slat (usec): min=18, max=9939, avg=1878.68, stdev=3249.97 00:12:36.017 clat (msec): min=12, max=231, avg=120.92, stdev=15.16 00:12:36.017 lat (msec): min=12, max=231, avg=122.80, stdev=15.10 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 44], 5.00th=[ 99], 10.00th=[ 115], 20.00th=[ 118], 00:12:36.017 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 126], 00:12:36.017 | 70.00th=[ 127], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 129], 00:12:36.017 | 99.00th=[ 131], 99.50th=[ 178], 99.90th=[ 226], 99.95th=[ 226], 00:12:36.017 | 99.99th=[ 232] 00:12:36.017 bw ( KiB/s): min=129024, max=166400, per=9.21%, avg=133273.60, stdev=9509.40, samples=20 00:12:36.017 iops : min= 504, max= 650, avg=520.60, stdev=37.15, samples=20 00:12:36.017 lat (msec) : 20=0.15%, 50=1.10%, 100=5.22%, 250=93.53% 00:12:36.017 cpu : usr=0.89%, sys=1.14%, ctx=7337, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,5269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 job10: (groupid=0, jobs=1): err= 0: pid=66818: Sun Oct 13 11:16:16 2024 00:12:36.017 write: IOPS=333, BW=83.4MiB/s (87.5MB/s)(847MiB/10159msec); 0 zone resets 00:12:36.017 slat (usec): min=17, max=99182, avg=2947.74, stdev=5362.97 00:12:36.017 clat (msec): min=101, max=342, avg=188.83, stdev=17.01 00:12:36.017 lat (msec): min=101, max=342, avg=191.78, stdev=16.43 00:12:36.017 clat percentiles (msec): 00:12:36.017 | 1.00th=[ 142], 5.00th=[ 159], 10.00th=[ 178], 20.00th=[ 182], 00:12:36.017 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 190], 60.00th=[ 192], 00:12:36.017 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 203], 95.00th=[ 207], 00:12:36.017 | 99.00th=[ 249], 
99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 342], 00:12:36.017 | 99.99th=[ 342] 00:12:36.017 bw ( KiB/s): min=81920, max=88064, per=5.89%, avg=85145.60, stdev=1661.98, samples=20 00:12:36.017 iops : min= 320, max= 344, avg=332.60, stdev= 6.49, samples=20 00:12:36.017 lat (msec) : 250=99.11%, 500=0.89% 00:12:36.017 cpu : usr=0.45%, sys=0.80%, ctx=3802, majf=0, minf=1 00:12:36.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:12:36.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:36.017 issued rwts: total=0,3389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.017 00:12:36.017 Run status group 0 (all jobs): 00:12:36.017 WRITE: bw=1413MiB/s (1481MB/s), 83.4MiB/s-270MiB/s (87.5MB/s-283MB/s), io=14.0GiB (15.1GB), run=10052-10175msec 00:12:36.017 00:12:36.017 Disk stats (read/write): 00:12:36.017 nvme0n1: ios=49/6781, merge=0/0, ticks=52/1212007, in_queue=1212059, util=97.95% 00:12:36.017 nvme10n1: ios=49/6785, merge=0/0, ticks=62/1211726, in_queue=1211788, util=98.07% 00:12:36.017 nvme1n1: ios=45/10410, merge=0/0, ticks=35/1215981, in_queue=1216016, util=98.26% 00:12:36.017 nvme2n1: ios=30/6737, merge=0/0, ticks=56/1212362, in_queue=1212418, util=98.26% 00:12:36.017 nvme3n1: ios=23/10399, merge=0/0, ticks=34/1216058, in_queue=1216092, util=98.20% 00:12:36.017 nvme4n1: ios=0/13439, merge=0/0, ticks=0/1217246, in_queue=1217246, util=98.32% 00:12:36.017 nvme5n1: ios=0/6983, merge=0/0, ticks=0/1213350, in_queue=1213350, util=98.47% 00:12:36.017 nvme6n1: ios=0/21549, merge=0/0, ticks=0/1217347, in_queue=1217347, util=98.37% 00:12:36.017 nvme7n1: ios=0/13467, merge=0/0, ticks=0/1218232, in_queue=1218232, util=98.84% 00:12:36.017 nvme8n1: ios=0/10407, merge=0/0, ticks=0/1215374, in_queue=1215374, util=98.86% 00:12:36.017 nvme9n1: ios=0/6644, merge=0/0, ticks=0/1211194, in_queue=1211194, util=98.84% 00:12:36.017 11:16:16 -- target/multiconnection.sh@36 -- # sync 00:12:36.017 11:16:16 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:36.017 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.017 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.017 11:16:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:36.017 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.017 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.017 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:12:36.017 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:12:36.017 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.017 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode2 
disconnected 1 controller(s) 00:12:36.018 11:16:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:36.018 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:12:36.018 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:36.018 11:16:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:36.018 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:12:36.018 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:36.018 11:16:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:36.018 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:12:36.018 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:36.018 11:16:16 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK5 00:12:36.018 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:12:36.018 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:36.018 11:16:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:36.018 11:16:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:12:36.018 11:16:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:36.018 11:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:36.018 11:16:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:36.018 11:16:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:12:36.018 11:16:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:36.018 11:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:36.018 11:16:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:36.018 11:16:17 -- common/autotest_common.sh@1198 -- # local 
i=0 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:12:36.018 11:16:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:36.018 11:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:36.018 11:16:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:36.018 11:16:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:36.018 11:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:36.018 11:16:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:36.018 11:16:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:12:36.018 11:16:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:36.018 11:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.018 11:16:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:12:36.018 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:36.018 11:16:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:36.018 11:16:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 
00:12:36.018 11:16:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.018 11:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:12:36.018 11:16:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.018 11:16:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:36.018 11:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.018 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.018 11:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.018 11:16:17 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:36.018 11:16:17 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:36.018 11:16:17 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:36.018 11:16:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.018 11:16:17 -- nvmf/common.sh@116 -- # sync 00:12:36.018 11:16:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:36.018 11:16:17 -- nvmf/common.sh@119 -- # set +e 00:12:36.018 11:16:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:36.018 11:16:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:36.018 rmmod nvme_tcp 00:12:36.018 rmmod nvme_fabrics 00:12:36.018 rmmod nvme_keyring 00:12:36.018 11:16:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:36.018 11:16:17 -- nvmf/common.sh@123 -- # set -e 00:12:36.018 11:16:17 -- nvmf/common.sh@124 -- # return 0 00:12:36.018 11:16:17 -- nvmf/common.sh@477 -- # '[' -n 66120 ']' 00:12:36.018 11:16:17 -- nvmf/common.sh@478 -- # killprocess 66120 00:12:36.018 11:16:17 -- common/autotest_common.sh@926 -- # '[' -z 66120 ']' 00:12:36.019 11:16:17 -- common/autotest_common.sh@930 -- # kill -0 66120 00:12:36.019 11:16:17 -- common/autotest_common.sh@931 -- # uname 00:12:36.019 11:16:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.019 11:16:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66120 00:12:36.019 killing process with pid 66120 00:12:36.019 11:16:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:36.019 11:16:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:36.019 11:16:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66120' 00:12:36.019 11:16:17 -- common/autotest_common.sh@945 -- # kill 66120 00:12:36.019 11:16:17 -- common/autotest_common.sh@950 -- # wait 66120 00:12:36.277 11:16:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:36.277 11:16:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:36.277 11:16:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:36.277 11:16:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.277 11:16:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:36.277 11:16:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.277 11:16:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.277 11:16:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.277 11:16:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.277 00:12:36.277 real 0m48.842s 00:12:36.277 user 2m37.240s 00:12:36.277 sys 0m37.263s 00:12:36.277 11:16:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.277 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.277 ************************************ 00:12:36.278 END TEST 
nvmf_multiconnection 00:12:36.278 ************************************ 00:12:36.536 11:16:17 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:36.536 11:16:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:36.536 11:16:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.536 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:36.536 ************************************ 00:12:36.536 START TEST nvmf_initiator_timeout 00:12:36.536 ************************************ 00:12:36.536 11:16:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:36.536 * Looking for test storage... 00:12:36.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.536 11:16:17 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.536 11:16:17 -- nvmf/common.sh@7 -- # uname -s 00:12:36.536 11:16:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.536 11:16:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.536 11:16:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.536 11:16:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.536 11:16:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.536 11:16:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.536 11:16:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.536 11:16:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.536 11:16:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.536 11:16:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.536 11:16:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:12:36.536 11:16:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:12:36.536 11:16:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.536 11:16:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.536 11:16:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.536 11:16:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.536 11:16:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.536 11:16:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.536 11:16:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.536 11:16:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.536 11:16:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.536 11:16:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.536 11:16:17 -- paths/export.sh@5 -- # export PATH 00:12:36.536 11:16:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.536 11:16:17 -- nvmf/common.sh@46 -- # : 0 00:12:36.536 11:16:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.536 11:16:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.536 11:16:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.536 11:16:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.536 11:16:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.536 11:16:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.536 11:16:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.536 11:16:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.536 11:16:17 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.536 11:16:17 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.536 11:16:17 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:36.536 11:16:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.536 11:16:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.536 11:16:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.536 11:16:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.536 11:16:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.536 11:16:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.536 11:16:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.536 11:16:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.536 11:16:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:36.536 11:16:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:36.536 11:16:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:36.536 11:16:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:36.536 11:16:18 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:12:36.536 11:16:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:36.536 11:16:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.536 11:16:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.536 11:16:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.536 11:16:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:36.536 11:16:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.536 11:16:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.536 11:16:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.536 11:16:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.536 11:16:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.536 11:16:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.536 11:16:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.536 11:16:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.536 11:16:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:36.536 11:16:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:36.536 Cannot find device "nvmf_tgt_br" 00:12:36.536 11:16:18 -- nvmf/common.sh@154 -- # true 00:12:36.536 11:16:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.536 Cannot find device "nvmf_tgt_br2" 00:12:36.536 11:16:18 -- nvmf/common.sh@155 -- # true 00:12:36.536 11:16:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:36.536 11:16:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:36.536 Cannot find device "nvmf_tgt_br" 00:12:36.536 11:16:18 -- nvmf/common.sh@157 -- # true 00:12:36.536 11:16:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:36.536 Cannot find device "nvmf_tgt_br2" 00:12:36.536 11:16:18 -- nvmf/common.sh@158 -- # true 00:12:36.536 11:16:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:36.536 11:16:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:36.536 11:16:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.794 11:16:18 -- nvmf/common.sh@161 -- # true 00:12:36.794 11:16:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.794 11:16:18 -- nvmf/common.sh@162 -- # true 00:12:36.794 11:16:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.794 11:16:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.794 11:16:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.794 11:16:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.794 11:16:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.794 11:16:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.794 11:16:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.794 11:16:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.794 11:16:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
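The nvmf_veth_init sequence above builds the test fabric from scratch: a dedicated network namespace for the NVMe-oF target, veth pairs for the initiator and target sides, and the 10.0.0.0/24 addresses used by every connect call later in the run. A minimal sketch of the same layout, reconstructed from the commands visible in the log (names and addresses are the ones shown here; this is an illustration, not the common.sh source):

ip netns add nvmf_tgt_ns_spdk                                 # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address

The *_br ends of each pair are then enslaved to the nvmf_br bridge in the commands that follow, which is what puts the initiator and the namespaced target on the same L2 segment.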
00:12:36.794 11:16:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:36.794 11:16:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:36.794 11:16:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:36.794 11:16:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:36.794 11:16:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.794 11:16:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.794 11:16:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.794 11:16:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:36.794 11:16:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:36.794 11:16:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.794 11:16:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.794 11:16:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.794 11:16:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.794 11:16:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.794 11:16:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:36.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:36.794 00:12:36.794 --- 10.0.0.2 ping statistics --- 00:12:36.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.794 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:36.794 11:16:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:36.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:36.794 00:12:36.794 --- 10.0.0.3 ping statistics --- 00:12:36.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.794 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:36.794 11:16:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:36.794 00:12:36.794 --- 10.0.0.1 ping statistics --- 00:12:36.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.794 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:36.794 11:16:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.794 11:16:18 -- nvmf/common.sh@421 -- # return 0 00:12:36.794 11:16:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:36.794 11:16:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.794 11:16:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:36.794 11:16:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:36.794 11:16:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.794 11:16:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:36.794 11:16:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:36.794 11:16:18 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:36.794 11:16:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.794 11:16:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:36.794 11:16:18 -- common/autotest_common.sh@10 -- # set +x 00:12:36.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
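Before the target application is started, the script verifies the fabric it just built: the two iptables rules open TCP/4420 toward the initiator interface and allow bridged forwarding, and the three pings confirm reachability in both directions across the bridge. Condensed from the log above (an annotated recap, not new commands):

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic arriving on the initiator veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward between its ports
ping -c 1 10.0.0.2                                  # initiator -> first target address
ping -c 1 10.0.0.3                                  # initiator -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

modprobe nvme-tcp then loads the kernel initiator driver, and nvmfappstart launches the SPDK target with reactor mask 0xF.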
00:12:36.794 11:16:18 -- nvmf/common.sh@469 -- # nvmfpid=67181 00:12:36.794 11:16:18 -- nvmf/common.sh@470 -- # waitforlisten 67181 00:12:36.794 11:16:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.794 11:16:18 -- common/autotest_common.sh@819 -- # '[' -z 67181 ']' 00:12:36.794 11:16:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.794 11:16:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.794 11:16:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.794 11:16:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.794 11:16:18 -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 [2024-10-13 11:16:18.405887] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:37.053 [2024-10-13 11:16:18.405987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.053 [2024-10-13 11:16:18.549645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.053 [2024-10-13 11:16:18.618908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.053 [2024-10-13 11:16:18.619100] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.053 [2024-10-13 11:16:18.619116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.053 [2024-10-13 11:16:18.619127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
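nvmfappstart runs the target binary inside the target namespace and waits for its RPC socket before issuing any rpc_cmd calls. A hedged sketch of that pattern, using the command line and socket path shown in the log (the polling loop is an illustrative simplification of waitforlisten, which also checks that the RPC server actually answers):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # block until the app listens on its UNIX RPC socket

The -m 0xF mask is why the startup messages report four available cores and one reactor on each of cores 0-3.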
00:12:37.053 [2024-10-13 11:16:18.619272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.053 [2024-10-13 11:16:18.619709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.054 [2024-10-13 11:16:18.619793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.054 [2024-10-13 11:16:18.619894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.988 11:16:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.988 11:16:19 -- common/autotest_common.sh@852 -- # return 0 00:12:37.988 11:16:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.988 11:16:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 11:16:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 Malloc0 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 Delay0 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 [2024-10-13 11:16:19.509800] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.988 11:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.988 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 [2024-10-13 11:16:19.537937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.988 11:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.988 11:16:19 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.297 11:16:19 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.297 11:16:19 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.297 11:16:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.297 11:16:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.297 11:16:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.198 11:16:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.198 11:16:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.198 11:16:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.198 11:16:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.198 11:16:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.198 11:16:21 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.198 11:16:21 -- target/initiator_timeout.sh@35 -- # fio_pid=67245 00:12:40.198 11:16:21 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:40.198 11:16:21 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:40.198 [global] 00:12:40.198 thread=1 00:12:40.198 invalidate=1 00:12:40.198 rw=write 00:12:40.198 time_based=1 00:12:40.198 runtime=60 00:12:40.198 ioengine=libaio 00:12:40.198 direct=1 00:12:40.198 bs=4096 00:12:40.198 iodepth=1 00:12:40.198 norandommap=0 00:12:40.198 numjobs=1 00:12:40.198 00:12:40.198 verify_dump=1 00:12:40.198 verify_backlog=512 00:12:40.198 verify_state_save=0 00:12:40.198 do_verify=1 00:12:40.198 verify=crc32c-intel 00:12:40.198 [job0] 00:12:40.198 filename=/dev/nvme0n1 00:12:40.198 Could not set queue depth (nvme0n1) 00:12:40.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.457 fio-3.35 00:12:40.457 Starting 1 thread 00:12:43.740 11:16:24 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:43.740 11:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.740 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:43.740 true 00:12:43.740 11:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.740 11:16:24 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:43.740 11:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.740 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:43.740 true 00:12:43.740 11:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.740 11:16:24 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:43.740 11:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.740 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:43.740 true 00:12:43.740 11:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.740 11:16:24 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:43.740 11:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.740 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:43.740 true 00:12:43.740 11:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.740 11:16:24 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:46.269 11:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.269 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.269 true 00:12:46.269 11:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:46.269 11:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.269 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.269 true 00:12:46.269 11:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:46.269 11:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.269 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.269 true 00:12:46.269 11:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:46.269 11:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.269 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:12:46.269 true 00:12:46.269 11:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:46.269 11:16:27 -- target/initiator_timeout.sh@54 -- # wait 67245 00:13:42.515 00:13:42.516 job0: (groupid=0, jobs=1): err= 0: pid=67266: Sun Oct 13 11:17:21 2024 00:13:42.516 read: IOPS=802, BW=3209KiB/s (3286kB/s)(188MiB/60000msec) 00:13:42.516 slat (usec): min=9, max=10615, avg=13.69, stdev=60.02 00:13:42.516 clat (usec): min=152, max=40522k, avg=1046.97, stdev=184709.04 00:13:42.516 lat (usec): min=164, max=40522k, avg=1060.66, stdev=184709.05 00:13:42.516 clat percentiles (usec): 00:13:42.516 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:13:42.516 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:13:42.516 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:13:42.516 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 383], 99.95th=[ 545], 00:13:42.516 | 99.99th=[ 1205] 00:13:42.516 write: IOPS=808, BW=3235KiB/s (3313kB/s)(190MiB/60000msec); 0 zone resets 00:13:42.516 slat (usec): min=12, max=525, avg=20.02, stdev= 6.15 00:13:42.516 clat (usec): min=113, max=1576, avg=161.47, stdev=24.06 00:13:42.516 lat (usec): min=130, max=1593, avg=181.49, stdev=25.15 00:13:42.516 clat percentiles (usec): 00:13:42.516 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:13:42.516 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:13:42.516 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 200], 00:13:42.516 | 99.00th=[ 223], 99.50th=[ 241], 99.90th=[ 277], 99.95th=[ 330], 00:13:42.516 | 99.99th=[ 652] 00:13:42.516 bw ( KiB/s): min= 4096, max=12168, per=100.00%, avg=9699.51, stdev=1684.33, samples=39 00:13:42.516 iops : min= 1024, max= 3042, avg=2424.87, stdev=421.08, samples=39 00:13:42.516 lat (usec) : 250=98.06%, 500=1.90%, 750=0.03%, 1000=0.01% 00:13:42.516 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:13:42.516 cpu : usr=0.56%, sys=2.06%, ctx=96666, majf=0, minf=5 00:13:42.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:42.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.516 issued rwts: total=48128,48532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.516 00:13:42.516 Run status group 0 (all jobs): 00:13:42.516 READ: bw=3209KiB/s (3286kB/s), 3209KiB/s-3209KiB/s (3286kB/s-3286kB/s), io=188MiB (197MB), run=60000-60000msec 00:13:42.516 WRITE: bw=3235KiB/s (3313kB/s), 3235KiB/s-3235KiB/s (3313kB/s-3313kB/s), io=190MiB (199MB), run=60000-60000msec 00:13:42.516 00:13:42.516 Disk stats (read/write): 00:13:42.516 nvme0n1: ios=48263/48128, merge=0/0, ticks=10444/8486, in_queue=18930, util=99.71% 00:13:42.516 11:17:21 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.516 11:17:22 -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.516 11:17:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:42.516 11:17:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.516 11:17:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:42.516 11:17:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.516 nvmf hotplug test: fio successful as expected 00:13:42.516 11:17:22 -- common/autotest_common.sh@1210 -- # return 0 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.516 11:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.516 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.516 11:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:42.516 11:17:22 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:42.516 11:17:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:42.516 11:17:22 -- nvmf/common.sh@116 -- # sync 00:13:42.516 11:17:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:42.516 11:17:22 -- nvmf/common.sh@119 -- # set +e 00:13:42.516 11:17:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:42.516 11:17:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:42.516 rmmod nvme_tcp 00:13:42.516 rmmod nvme_fabrics 00:13:42.516 rmmod nvme_keyring 00:13:42.516 11:17:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:42.516 11:17:22 -- nvmf/common.sh@123 -- # set -e 00:13:42.516 11:17:22 -- nvmf/common.sh@124 -- # return 0 00:13:42.516 11:17:22 -- nvmf/common.sh@477 -- # '[' -n 67181 ']' 00:13:42.516 11:17:22 -- nvmf/common.sh@478 -- # killprocess 67181 00:13:42.516 11:17:22 -- common/autotest_common.sh@926 -- # '[' -z 67181 ']' 00:13:42.516 11:17:22 -- common/autotest_common.sh@930 -- # kill -0 67181 00:13:42.516 11:17:22 -- common/autotest_common.sh@931 -- # uname 00:13:42.516 11:17:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:42.516 11:17:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67181 00:13:42.516 killing process with pid 
67181 00:13:42.516 11:17:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:42.516 11:17:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:42.516 11:17:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67181' 00:13:42.516 11:17:22 -- common/autotest_common.sh@945 -- # kill 67181 00:13:42.516 11:17:22 -- common/autotest_common.sh@950 -- # wait 67181 00:13:42.516 11:17:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:42.516 11:17:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:42.516 11:17:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:42.516 11:17:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.516 11:17:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:42.516 11:17:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.516 11:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.516 11:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.516 11:17:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:42.516 ************************************ 00:13:42.516 END TEST nvmf_initiator_timeout 00:13:42.516 ************************************ 00:13:42.516 00:13:42.516 real 1m4.493s 00:13:42.516 user 3m53.639s 00:13:42.516 sys 0m21.353s 00:13:42.516 11:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.516 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.516 11:17:22 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:42.516 11:17:22 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:42.516 11:17:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.516 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.516 11:17:22 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:42.516 11:17:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:42.516 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.516 11:17:22 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:42.516 11:17:22 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:42.516 11:17:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:42.516 11:17:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.516 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.516 ************************************ 00:13:42.516 START TEST nvmf_identify 00:13:42.516 ************************************ 00:13:42.516 11:17:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:42.516 * Looking for test storage... 
00:13:42.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:42.516 11:17:22 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.516 11:17:22 -- nvmf/common.sh@7 -- # uname -s 00:13:42.516 11:17:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.516 11:17:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.516 11:17:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.516 11:17:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.516 11:17:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.516 11:17:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.516 11:17:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.516 11:17:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.516 11:17:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.516 11:17:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.516 11:17:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:13:42.516 11:17:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:13:42.516 11:17:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.516 11:17:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.516 11:17:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.516 11:17:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.516 11:17:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.516 11:17:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.516 11:17:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.516 11:17:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.516 11:17:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.516 11:17:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.516 11:17:22 -- paths/export.sh@5 
-- # export PATH 00:13:42.516 11:17:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.516 11:17:22 -- nvmf/common.sh@46 -- # : 0 00:13:42.516 11:17:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.516 11:17:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.516 11:17:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.517 11:17:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.517 11:17:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.517 11:17:22 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.517 11:17:22 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.517 11:17:22 -- host/identify.sh@14 -- # nvmftestinit 00:13:42.517 11:17:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.517 11:17:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:42.517 11:17:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:42.517 11:17:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:42.517 11:17:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.517 11:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.517 11:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.517 11:17:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:42.517 11:17:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.517 11:17:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.517 11:17:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.517 11:17:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:42.517 11:17:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.517 11:17:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.517 11:17:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.517 11:17:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.517 11:17:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.517 11:17:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.517 11:17:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.517 11:17:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.517 11:17:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:42.517 11:17:22 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:42.517 Cannot find device "nvmf_tgt_br" 00:13:42.517 11:17:22 -- nvmf/common.sh@154 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.517 Cannot find device "nvmf_tgt_br2" 00:13:42.517 11:17:22 -- nvmf/common.sh@155 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:42.517 11:17:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:42.517 Cannot find device "nvmf_tgt_br" 00:13:42.517 11:17:22 -- nvmf/common.sh@157 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:42.517 Cannot find device "nvmf_tgt_br2" 00:13:42.517 11:17:22 -- nvmf/common.sh@158 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:42.517 11:17:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:42.517 11:17:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.517 11:17:22 -- nvmf/common.sh@161 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.517 11:17:22 -- nvmf/common.sh@162 -- # true 00:13:42.517 11:17:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.517 11:17:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.517 11:17:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.517 11:17:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.517 11:17:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.517 11:17:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.517 11:17:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.517 11:17:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:42.517 11:17:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:42.517 11:17:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:42.517 11:17:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:42.517 11:17:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:42.517 11:17:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:42.517 11:17:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.517 11:17:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.517 11:17:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.517 11:17:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:42.517 11:17:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:42.517 11:17:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.517 11:17:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.517 11:17:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:42.517 11:17:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.517 11:17:22 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.517 11:17:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:42.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:42.517 00:13:42.517 --- 10.0.0.2 ping statistics --- 00:13:42.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.517 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:42.517 11:17:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:42.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:42.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:42.517 00:13:42.517 --- 10.0.0.3 ping statistics --- 00:13:42.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.517 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:42.517 11:17:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:42.517 00:13:42.517 --- 10.0.0.1 ping statistics --- 00:13:42.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.517 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:42.517 11:17:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.517 11:17:22 -- nvmf/common.sh@421 -- # return 0 00:13:42.517 11:17:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.517 11:17:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:42.517 11:17:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.517 11:17:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:42.517 11:17:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:42.517 11:17:22 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:42.517 11:17:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:42.517 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.517 11:17:22 -- host/identify.sh@19 -- # nvmfpid=68105 00:13:42.517 11:17:22 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.517 11:17:22 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.517 11:17:22 -- host/identify.sh@23 -- # waitforlisten 68105 00:13:42.517 11:17:22 -- common/autotest_common.sh@819 -- # '[' -z 68105 ']' 00:13:42.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.517 11:17:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.517 11:17:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.517 11:17:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.517 11:17:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.517 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:42.517 [2024-10-13 11:17:23.009650] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
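Every target test in this run goes through the same init/fini cycle, which is why the 'Cannot find device ...' and 'Cannot open network namespace ...' messages earlier in this block are expected: nvmf_veth_init first tries to delete any leftover bridge, veth and namespace from the previous test and simply lets those deletions fail on a clean slate. The matching teardown seen at the end of each test, condensed from the log (killprocess in the full script also checks the process name before killing; this is a simplified recap):

modprobe -v -r nvme-tcp              # unload the kernel initiator (rmmod nvme_tcp / nvme_fabrics / nvme_keyring)
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the namespaced nvmf_tgt
ip -4 addr flush nvmf_init_if        # drop the initiator-side addresses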
00:13:42.517 [2024-10-13 11:17:23.009968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.517 [2024-10-13 11:17:23.149976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.517 [2024-10-13 11:17:23.201063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.517 [2024-10-13 11:17:23.201208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.517 [2024-10-13 11:17:23.201220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.517 [2024-10-13 11:17:23.201228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.517 [2024-10-13 11:17:23.201311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.517 [2024-10-13 11:17:23.201665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.517 [2024-10-13 11:17:23.202103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.517 [2024-10-13 11:17:23.202112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.517 11:17:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:42.517 11:17:24 -- common/autotest_common.sh@852 -- # return 0 00:13:42.517 11:17:24 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.517 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.517 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.517 [2024-10-13 11:17:24.027921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.517 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.517 11:17:24 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:42.517 11:17:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.517 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.517 11:17:24 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.517 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.517 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.517 Malloc0 00:13:42.517 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.517 11:17:24 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.517 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.517 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.778 11:17:24 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:42.778 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.778 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.778 11:17:24 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.778 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.778 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 [2024-10-13 11:17:24.130128] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.778 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.778 11:17:24 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.778 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.778 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.778 11:17:24 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:42.778 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.778 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 [2024-10-13 11:17:24.145909] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:42.778 [ 00:13:42.778 { 00:13:42.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:42.778 "subtype": "Discovery", 00:13:42.778 "listen_addresses": [ 00:13:42.778 { 00:13:42.778 "transport": "TCP", 00:13:42.778 "trtype": "TCP", 00:13:42.778 "adrfam": "IPv4", 00:13:42.778 "traddr": "10.0.0.2", 00:13:42.778 "trsvcid": "4420" 00:13:42.778 } 00:13:42.778 ], 00:13:42.778 "allow_any_host": true, 00:13:42.778 "hosts": [] 00:13:42.778 }, 00:13:42.778 { 00:13:42.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.778 "subtype": "NVMe", 00:13:42.778 "listen_addresses": [ 00:13:42.778 { 00:13:42.778 "transport": "TCP", 00:13:42.778 "trtype": "TCP", 00:13:42.778 "adrfam": "IPv4", 00:13:42.778 "traddr": "10.0.0.2", 00:13:42.778 "trsvcid": "4420" 00:13:42.778 } 00:13:42.778 ], 00:13:42.778 "allow_any_host": true, 00:13:42.778 "hosts": [], 00:13:42.778 "serial_number": "SPDK00000000000001", 00:13:42.778 "model_number": "SPDK bdev Controller", 00:13:42.778 "max_namespaces": 32, 00:13:42.778 "min_cntlid": 1, 00:13:42.778 "max_cntlid": 65519, 00:13:42.778 "namespaces": [ 00:13:42.778 { 00:13:42.778 "nsid": 1, 00:13:42.778 "bdev_name": "Malloc0", 00:13:42.778 "name": "Malloc0", 00:13:42.778 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:42.778 "eui64": "ABCDEF0123456789", 00:13:42.778 "uuid": "a5c897bb-338a-4926-8985-6123c012026e" 00:13:42.778 } 00:13:42.778 ] 00:13:42.778 } 00:13:42.778 ] 00:13:42.778 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.778 11:17:24 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:42.778 [2024-10-13 11:17:24.185954] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
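--- annotation: the subsystem listing above is produced after host/identify.sh drives the target through rpc_cmd, the autotest wrapper around SPDK's JSON-RPC client. A minimal sketch of the same sequence; the nvmf_tgt flags, modprobe, RPC method names and arguments are reproduced from the trace, while the explicit scripts/rpc.py form and the relative binary paths are assumptions for illustration:

  modprobe nvme-tcp
  # Launch the target inside the namespace built earlier (flags as logged: -i 0 -e 0xFFFF -m 0xF),
  # then wait for the RPC socket /var/tmp/spdk.sock before configuring.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
--- end annotation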
00:13:42.778 [2024-10-13 11:17:24.186165] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68141 ] 00:13:42.778 [2024-10-13 11:17:24.325252] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:42.778 [2024-10-13 11:17:24.325329] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:42.778 [2024-10-13 11:17:24.325346] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:42.778 [2024-10-13 11:17:24.325359] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:42.778 [2024-10-13 11:17:24.325373] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:42.778 [2024-10-13 11:17:24.325502] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:42.779 [2024-10-13 11:17:24.325583] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x176ed30 0 00:13:42.779 [2024-10-13 11:17:24.339368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:42.779 [2024-10-13 11:17:24.339390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:42.779 [2024-10-13 11:17:24.339412] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:42.779 [2024-10-13 11:17:24.339415] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:42.779 [2024-10-13 11:17:24.339456] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.339464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.339468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.339482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:42.779 [2024-10-13 11:17:24.339511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.347366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.347386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.347407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347412] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.347428] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:42.779 [2024-10-13 11:17:24.347436] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:42.779 [2024-10-13 11:17:24.347442] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:42.779 [2024-10-13 11:17:24.347457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 
11:17:24.347466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.347474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.347500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.347557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.347564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.347567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.347578] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:42.779 [2024-10-13 11:17:24.347585] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:42.779 [2024-10-13 11:17:24.347592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.347607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.347624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.347702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.347708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.347712] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.347724] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:42.779 [2024-10-13 11:17:24.347732] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.347739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.347754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.347772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.347823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.347829] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.347833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.347843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.347853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.347869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.347886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.347934] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.347941] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.347944] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.347948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.347954] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:42.779 [2024-10-13 11:17:24.347960] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.347968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.348073] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:42.779 [2024-10-13 11:17:24.348079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.348087] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.348103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.348120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.348179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.348186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.348190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:13:42.779 [2024-10-13 11:17:24.348194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.348200] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:42.779 [2024-10-13 11:17:24.348209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.348225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.348242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.348290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.348297] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.348300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.348310] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:42.779 [2024-10-13 11:17:24.348315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:42.779 [2024-10-13 11:17:24.348322] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:42.779 [2024-10-13 11:17:24.348352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:42.779 [2024-10-13 11:17:24.348363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.779 [2024-10-13 11:17:24.348380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.779 [2024-10-13 11:17:24.348415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.779 [2024-10-13 11:17:24.348507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:42.779 [2024-10-13 11:17:24.348514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:42.779 [2024-10-13 11:17:24.348518] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x176ed30): datao=0, datal=4096, cccid=0 00:13:42.779 [2024-10-13 11:17:24.348527] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ccf30) on tqpair(0x176ed30): expected_datao=0, 
payload_size=4096 00:13:42.779 [2024-10-13 11:17:24.348536] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348541] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.779 [2024-10-13 11:17:24.348556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.779 [2024-10-13 11:17:24.348560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.779 [2024-10-13 11:17:24.348564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.779 [2024-10-13 11:17:24.348574] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:42.779 [2024-10-13 11:17:24.348579] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:42.779 [2024-10-13 11:17:24.348584] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:42.779 [2024-10-13 11:17:24.348589] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:42.779 [2024-10-13 11:17:24.348594] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:42.780 [2024-10-13 11:17:24.348600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:42.780 [2024-10-13 11:17:24.348613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:42.780 [2024-10-13 11:17:24.348621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.780 [2024-10-13 11:17:24.348657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.780 [2024-10-13 11:17:24.348713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.348720] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.348724] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17ccf30) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.348737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.780 [2024-10-13 
11:17:24.348772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.780 [2024-10-13 11:17:24.348792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.780 [2024-10-13 11:17:24.348812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.780 [2024-10-13 11:17:24.348830] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:42.780 [2024-10-13 11:17:24.348843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:42.780 [2024-10-13 11:17:24.348850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.348858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.348865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.780 [2024-10-13 11:17:24.348884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ccf30, cid 0, qid 0 00:13:42.780 [2024-10-13 11:17:24.348891] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd090, cid 1, qid 0 00:13:42.780 [2024-10-13 11:17:24.348896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd1f0, cid 2, qid 0 00:13:42.780 [2024-10-13 11:17:24.348900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.780 [2024-10-13 11:17:24.348905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd4b0, cid 4, qid 0 00:13:42.780 [2024-10-13 11:17:24.348996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.349003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.349006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x17cd4b0) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.349016] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:42.780 [2024-10-13 11:17:24.349022] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:42.780 [2024-10-13 11:17:24.349033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.349048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.780 [2024-10-13 11:17:24.349065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd4b0, cid 4, qid 0 00:13:42.780 [2024-10-13 11:17:24.349125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:42.780 [2024-10-13 11:17:24.349131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:42.780 [2024-10-13 11:17:24.349135] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349139] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x176ed30): datao=0, datal=4096, cccid=4 00:13:42.780 [2024-10-13 11:17:24.349143] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17cd4b0) on tqpair(0x176ed30): expected_datao=0, payload_size=4096 00:13:42.780 [2024-10-13 11:17:24.349151] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349155] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.349169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.349173] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd4b0) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.349190] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:42.780 [2024-10-13 11:17:24.349216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.349233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.780 [2024-10-13 11:17:24.349241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.349254] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.780 [2024-10-13 11:17:24.349277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd4b0, cid 4, qid 0 00:13:42.780 [2024-10-13 11:17:24.349285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd610, cid 5, qid 0 00:13:42.780 [2024-10-13 11:17:24.349403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:42.780 [2024-10-13 11:17:24.349412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:42.780 [2024-10-13 11:17:24.349415] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349419] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x176ed30): datao=0, datal=1024, cccid=4 00:13:42.780 [2024-10-13 11:17:24.349424] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17cd4b0) on tqpair(0x176ed30): expected_datao=0, payload_size=1024 00:13:42.780 [2024-10-13 11:17:24.349432] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349435] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.349447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.349451] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd610) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.349473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.349480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.349484] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd4b0) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.349505] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349514] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.349522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.780 [2024-10-13 11:17:24.349545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd4b0, cid 4, qid 0 00:13:42.780 [2024-10-13 11:17:24.349616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:42.780 [2024-10-13 11:17:24.349622] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:42.780 [2024-10-13 11:17:24.349626] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349630] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x176ed30): datao=0, datal=3072, cccid=4 00:13:42.780 [2024-10-13 11:17:24.349634] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17cd4b0) on tqpair(0x176ed30): expected_datao=0, payload_size=3072 00:13:42.780 [2024-10-13 
11:17:24.349642] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349646] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.780 [2024-10-13 11:17:24.349660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.780 [2024-10-13 11:17:24.349663] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd4b0) on tqpair=0x176ed30 00:13:42.780 [2024-10-13 11:17:24.349677] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.780 [2024-10-13 11:17:24.349685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x176ed30) 00:13:42.780 [2024-10-13 11:17:24.349692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.780 [2024-10-13 11:17:24.349714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd4b0, cid 4, qid 0 00:13:42.780 [2024-10-13 11:17:24.349783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:42.780 [2024-10-13 11:17:24.349790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:42.780 [2024-10-13 11:17:24.349794] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:42.781 [2024-10-13 11:17:24.349797] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x176ed30): datao=0, datal=8, cccid=4 00:13:42.781 [2024-10-13 11:17:24.349802] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17cd4b0) on tqpair(0x176ed30): expected_datao=0, payload_size=8 00:13:42.781 [2024-10-13 11:17:24.349809] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:42.781 ===================================================== 00:13:42.781 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:42.781 ===================================================== 00:13:42.781 Controller Capabilities/Features 00:13:42.781 ================================ 00:13:42.781 Vendor ID: 0000 00:13:42.781 Subsystem Vendor ID: 0000 00:13:42.781 Serial Number: .................... 00:13:42.781 Model Number: ........................................ 
00:13:42.781 Firmware Version: 24.01.1 00:13:42.781 Recommended Arb Burst: 0 00:13:42.781 IEEE OUI Identifier: 00 00 00 00:13:42.781 Multi-path I/O 00:13:42.781 May have multiple subsystem ports: No 00:13:42.781 May have multiple controllers: No 00:13:42.781 Associated with SR-IOV VF: No 00:13:42.781 Max Data Transfer Size: 131072 00:13:42.781 Max Number of Namespaces: 0 00:13:42.781 Max Number of I/O Queues: 1024 00:13:42.781 NVMe Specification Version (VS): 1.3 00:13:42.781 NVMe Specification Version (Identify): 1.3 00:13:42.781 Maximum Queue Entries: 128 00:13:42.781 Contiguous Queues Required: Yes 00:13:42.781 Arbitration Mechanisms Supported 00:13:42.781 Weighted Round Robin: Not Supported 00:13:42.781 Vendor Specific: Not Supported 00:13:42.781 Reset Timeout: 15000 ms 00:13:42.781 Doorbell Stride: 4 bytes 00:13:42.781 NVM Subsystem Reset: Not Supported 00:13:42.781 Command Sets Supported 00:13:42.781 NVM Command Set: Supported 00:13:42.781 Boot Partition: Not Supported 00:13:42.781 Memory Page Size Minimum: 4096 bytes 00:13:42.781 Memory Page Size Maximum: 4096 bytes 00:13:42.781 Persistent Memory Region: Not Supported 00:13:42.781 Optional Asynchronous Events Supported 00:13:42.781 Namespace Attribute Notices: Not Supported 00:13:42.781 Firmware Activation Notices: Not Supported 00:13:42.781 ANA Change Notices: Not Supported 00:13:42.781 PLE Aggregate Log Change Notices: Not Supported 00:13:42.781 LBA Status Info Alert Notices: Not Supported 00:13:42.781 EGE Aggregate Log Change Notices: Not Supported 00:13:42.781 Normal NVM Subsystem Shutdown event: Not Supported 00:13:42.781 Zone Descriptor Change Notices: Not Supported 00:13:42.781 Discovery Log Change Notices: Supported 00:13:42.781 Controller Attributes 00:13:42.781 128-bit Host Identifier: Not Supported 00:13:42.781 Non-Operational Permissive Mode: Not Supported 00:13:42.781 NVM Sets: Not Supported 00:13:42.781 Read Recovery Levels: Not Supported 00:13:42.781 Endurance Groups: Not Supported 00:13:42.781 Predictable Latency Mode: Not Supported 00:13:42.781 Traffic Based Keep ALive: Not Supported 00:13:42.781 Namespace Granularity: Not Supported 00:13:42.781 SQ Associations: Not Supported 00:13:42.781 UUID List: Not Supported 00:13:42.781 Multi-Domain Subsystem: Not Supported 00:13:42.781 Fixed Capacity Management: Not Supported 00:13:42.781 Variable Capacity Management: Not Supported 00:13:42.781 Delete Endurance Group: Not Supported 00:13:42.781 Delete NVM Set: Not Supported 00:13:42.781 Extended LBA Formats Supported: Not Supported 00:13:42.781 Flexible Data Placement Supported: Not Supported 00:13:42.781 00:13:42.781 Controller Memory Buffer Support 00:13:42.781 ================================ 00:13:42.781 Supported: No 00:13:42.781 00:13:42.781 Persistent Memory Region Support 00:13:42.781 ================================ 00:13:42.781 Supported: No 00:13:42.781 00:13:42.781 Admin Command Set Attributes 00:13:42.781 ============================ 00:13:42.781 Security Send/Receive: Not Supported 00:13:42.781 Format NVM: Not Supported 00:13:42.781 Firmware Activate/Download: Not Supported 00:13:42.781 Namespace Management: Not Supported 00:13:42.781 Device Self-Test: Not Supported 00:13:42.781 Directives: Not Supported 00:13:42.781 NVMe-MI: Not Supported 00:13:42.781 Virtualization Management: Not Supported 00:13:42.781 Doorbell Buffer Config: Not Supported 00:13:42.781 Get LBA Status Capability: Not Supported 00:13:42.781 Command & Feature Lockdown Capability: Not Supported 00:13:42.781 Abort Command Limit: 1 00:13:42.781 
Async Event Request Limit: 4 00:13:42.781 Number of Firmware Slots: N/A 00:13:42.781 Firmware Slot 1 Read-Only: N/A 00:13:42.781 Firmware Activation Without Reset: N/A 00:13:42.781 Multiple Update Detection Support: N/A 00:13:42.781 Firmware Update Granularity: No Information Provided 00:13:42.781 Per-Namespace SMART Log: No 00:13:42.781 Asymmetric Namespace Access Log Page: Not Supported 00:13:42.781 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:42.781 Command Effects Log Page: Not Supported 00:13:42.781 Get Log Page Extended Data: Supported 00:13:42.781 Telemetry Log Pages: Not Supported 00:13:42.781 Persistent Event Log Pages: Not Supported 00:13:42.781 Supported Log Pages Log Page: May Support 00:13:42.781 Commands Supported & Effects Log Page: Not Supported 00:13:42.781 Feature Identifiers & Effects Log Page:May Support 00:13:42.781 NVMe-MI Commands & Effects Log Page: May Support 00:13:42.781 Data Area 4 for Telemetry Log: Not Supported 00:13:42.781 Error Log Page Entries Supported: 128 00:13:42.781 Keep Alive: Not Supported 00:13:42.781 00:13:42.781 NVM Command Set Attributes 00:13:42.781 ========================== 00:13:42.781 Submission Queue Entry Size 00:13:42.781 Max: 1 00:13:42.781 Min: 1 00:13:42.781 Completion Queue Entry Size 00:13:42.781 Max: 1 00:13:42.781 Min: 1 00:13:42.781 Number of Namespaces: 0 00:13:42.781 Compare Command: Not Supported 00:13:42.781 Write Uncorrectable Command: Not Supported 00:13:42.781 Dataset Management Command: Not Supported 00:13:42.781 Write Zeroes Command: Not Supported 00:13:42.781 Set Features Save Field: Not Supported 00:13:42.781 Reservations: Not Supported 00:13:42.781 Timestamp: Not Supported 00:13:42.781 Copy: Not Supported 00:13:42.781 Volatile Write Cache: Not Present 00:13:42.781 Atomic Write Unit (Normal): 1 00:13:42.781 Atomic Write Unit (PFail): 1 00:13:42.781 Atomic Compare & Write Unit: 1 00:13:42.781 Fused Compare & Write: Supported 00:13:42.781 Scatter-Gather List 00:13:42.781 SGL Command Set: Supported 00:13:42.781 SGL Keyed: Supported 00:13:42.781 SGL Bit Bucket Descriptor: Not Supported 00:13:42.781 SGL Metadata Pointer: Not Supported 00:13:42.781 Oversized SGL: Not Supported 00:13:42.781 SGL Metadata Address: Not Supported 00:13:42.781 SGL Offset: Supported 00:13:42.781 Transport SGL Data Block: Not Supported 00:13:42.781 Replay Protected Memory Block: Not Supported 00:13:42.781 00:13:42.781 Firmware Slot Information 00:13:42.781 ========================= 00:13:42.781 Active slot: 0 00:13:42.781 00:13:42.781 00:13:42.781 Error Log 00:13:42.781 ========= 00:13:42.781 00:13:42.781 Active Namespaces 00:13:42.781 ================= 00:13:42.781 Discovery Log Page 00:13:42.781 ================== 00:13:42.781 Generation Counter: 2 00:13:42.781 Number of Records: 2 00:13:42.781 Record Format: 0 00:13:42.781 00:13:42.781 Discovery Log Entry 0 00:13:42.781 ---------------------- 00:13:42.781 Transport Type: 3 (TCP) 00:13:42.781 Address Family: 1 (IPv4) 00:13:42.781 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:42.781 Entry Flags: 00:13:42.781 Duplicate Returned Information: 1 00:13:42.781 Explicit Persistent Connection Support for Discovery: 1 00:13:42.781 Transport Requirements: 00:13:42.781 Secure Channel: Not Required 00:13:42.781 Port ID: 0 (0x0000) 00:13:42.781 Controller ID: 65535 (0xffff) 00:13:42.781 Admin Max SQ Size: 128 00:13:42.781 Transport Service Identifier: 4420 00:13:42.781 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:42.781 Transport Address: 10.0.0.2 00:13:42.781 
Discovery Log Entry 1 00:13:42.781 ---------------------- 00:13:42.781 Transport Type: 3 (TCP) 00:13:42.781 Address Family: 1 (IPv4) 00:13:42.781 Subsystem Type: 2 (NVM Subsystem) 00:13:42.781 Entry Flags: 00:13:42.781 Duplicate Returned Information: 0 00:13:42.781 Explicit Persistent Connection Support for Discovery: 0 00:13:42.781 Transport Requirements: 00:13:42.781 Secure Channel: Not Required 00:13:42.781 Port ID: 0 (0x0000) 00:13:42.781 Controller ID: 65535 (0xffff) 00:13:42.781 Admin Max SQ Size: 128 00:13:42.781 Transport Service Identifier: 4420 00:13:42.781 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:42.781 Transport Address: 10.0.0.2 [2024-10-13 11:17:24.349813] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:42.781 [2024-10-13 11:17:24.349827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.781 [2024-10-13 11:17:24.349834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.781 [2024-10-13 11:17:24.349837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.781 [2024-10-13 11:17:24.349841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd4b0) on tqpair=0x176ed30 00:13:42.781 [2024-10-13 11:17:24.349934] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:42.781 [2024-10-13 11:17:24.349950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.782 [2024-10-13 11:17:24.349957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.782 [2024-10-13 11:17:24.349964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.782 [2024-10-13 11:17:24.349970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.782 [2024-10-13 11:17:24.349979] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.349983] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.349987] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.349995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 
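--- annotation: the discovery log page printed above reports two entries, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both reachable over TCP/IPv4 at 10.0.0.2 port 4420. For comparison only — this test uses the SPDK identify example, not the kernel initiator — the same log page could in principle be read with nvme-cli from the initiator interface, relying on the nvme-tcp module loaded earlier:

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # optionally attach to the reported NVM subsystem:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
--- end annotation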
00:13:42.782 [2024-10-13 11:17:24.350106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350216] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:42.782 [2024-10-13 11:17:24.350220] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:42.782 [2024-10-13 11:17:24.350230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350338] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:42.782 [2024-10-13 11:17:24.350470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350576] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350690] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350705] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350710] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350830] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350847] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.350900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.350907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.350911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.350926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.350934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.350942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.350959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.351020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.351027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.351030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.351045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.351060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.782 [2024-10-13 11:17:24.351076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.782 [2024-10-13 11:17:24.351122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.782 [2024-10-13 11:17:24.351128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.782 [2024-10-13 11:17:24.351132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.782 [2024-10-13 11:17:24.351146] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.782 [2024-10-13 11:17:24.351154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.782 [2024-10-13 11:17:24.351162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.783 [2024-10-13 11:17:24.351178] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.783 [2024-10-13 11:17:24.351224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.783 [2024-10-13 11:17:24.351230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.783 [2024-10-13 11:17:24.351234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.351238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.783 [2024-10-13 11:17:24.351248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.351253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.351256] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.783 [2024-10-13 11:17:24.351264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.783 [2024-10-13 11:17:24.351280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.783 [2024-10-13 11:17:24.351331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.783 [2024-10-13 11:17:24.351338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.783 [2024-10-13 11:17:24.351342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.351346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.783 [2024-10-13 11:17:24.355388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.355408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.355413] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x176ed30) 00:13:42.783 [2024-10-13 11:17:24.355421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:42.783 [2024-10-13 11:17:24.355445] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17cd350, cid 3, qid 0 00:13:42.783 [2024-10-13 11:17:24.355497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:42.783 [2024-10-13 11:17:24.355504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:42.783 [2024-10-13 11:17:24.355508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:42.783 [2024-10-13 11:17:24.355512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17cd350) on tqpair=0x176ed30 00:13:42.783 [2024-10-13 11:17:24.355520] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:13:42.783 00:13:42.783 11:17:24 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:43.046 [2024-10-13 11:17:24.394419] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
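The spdk_nvme_identify invocation above is what drives the fabrics connect, controller-enable, and Identify admin commands traced in the DEBUG lines that follow. Below is a minimal C sketch of the same flow against this target through the public SPDK API; it is illustrative only (not the identify tool's source), and the program name, error handling, and printed fields are assumptions rather than anything taken from this log.

/*
 * Sketch: connect to the NVMe-oF/TCP subsystem used by this test and print a
 * few Identify Controller fields. The transport string mirrors the -r argument
 * passed to spdk_nvme_identify above.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Runs the connect/enable/identify state machine the DEBUG trace shows. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s Serial: %.20s FW: %.8s\n",
	       (const char *)cdata->mn, (const char *)cdata->sn,
	       (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);   /* triggers the shutdown sequence logged at the end */
	return 0;
}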
00:13:43.046 [2024-10-13 11:17:24.394465] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68153 ] 00:13:43.046 [2024-10-13 11:17:24.531172] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:43.046 [2024-10-13 11:17:24.531249] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:43.046 [2024-10-13 11:17:24.531257] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:43.046 [2024-10-13 11:17:24.531268] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:43.046 [2024-10-13 11:17:24.531280] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:43.046 [2024-10-13 11:17:24.535443] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:43.046 [2024-10-13 11:17:24.535529] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cebd30 0 00:13:43.046 [2024-10-13 11:17:24.535595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:43.046 [2024-10-13 11:17:24.535604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:43.046 [2024-10-13 11:17:24.535608] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:43.046 [2024-10-13 11:17:24.535612] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:43.046 [2024-10-13 11:17:24.535651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.535658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.535662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.046 [2024-10-13 11:17:24.535675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:43.046 [2024-10-13 11:17:24.535728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.046 [2024-10-13 11:17:24.543420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.046 [2024-10-13 11:17:24.543441] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.046 [2024-10-13 11:17:24.543462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.046 [2024-10-13 11:17:24.543498] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:43.046 [2024-10-13 11:17:24.543506] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:43.046 [2024-10-13 11:17:24.543512] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:43.046 [2024-10-13 11:17:24.543528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543538] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.046 [2024-10-13 11:17:24.543548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.046 [2024-10-13 11:17:24.543575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.046 [2024-10-13 11:17:24.543633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.046 [2024-10-13 11:17:24.543641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.046 [2024-10-13 11:17:24.543645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.046 [2024-10-13 11:17:24.543656] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:43.046 [2024-10-13 11:17:24.543664] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:43.046 [2024-10-13 11:17:24.543672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.046 [2024-10-13 11:17:24.543689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.046 [2024-10-13 11:17:24.543707] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.046 [2024-10-13 11:17:24.543757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.046 [2024-10-13 11:17:24.543764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.046 [2024-10-13 11:17:24.543768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543772] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.046 [2024-10-13 11:17:24.543779] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:43.046 [2024-10-13 11:17:24.543788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:43.046 [2024-10-13 11:17:24.543796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.046 [2024-10-13 11:17:24.543812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.046 [2024-10-13 11:17:24.543830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.046 [2024-10-13 11:17:24.543884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.046 [2024-10-13 11:17:24.543890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.046 [2024-10-13 
11:17:24.543894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.046 [2024-10-13 11:17:24.543906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:43.046 [2024-10-13 11:17:24.543916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.543925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.046 [2024-10-13 11:17:24.543933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.046 [2024-10-13 11:17:24.543950] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.046 [2024-10-13 11:17:24.544007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.046 [2024-10-13 11:17:24.544014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.046 [2024-10-13 11:17:24.544017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.046 [2024-10-13 11:17:24.544022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.046 [2024-10-13 11:17:24.544028] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:43.047 [2024-10-13 11:17:24.544033] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:43.047 [2024-10-13 11:17:24.544042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:43.047 [2024-10-13 11:17:24.544148] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:43.047 [2024-10-13 11:17:24.544153] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:43.047 [2024-10-13 11:17:24.544162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.047 [2024-10-13 11:17:24.544196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.047 [2024-10-13 11:17:24.544247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.544253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.544257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.047 
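The *DEBUG* lines throughout this run appear because the command above passes -L all, which enables SPDK's debug log flags (the messages themselves are compiled in only when SPDK is built with --enable-debug). A short sketch of enabling the equivalent traces from an application follows; the flag name is the one registered by the nvme library and is stated here as an assumption, since registrations can differ between trees.

/* Sketch: turn on the nvme driver's DEBUG traces programmatically. */
#include "spdk/log.h"

int main(void)
{
	/* Emit DEBUG-level messages on the console. */
	spdk_log_set_print_level(SPDK_LOG_DEBUG);

	/* Enable the "nvme" log flag, which covers the nvme_ctrlr/nvme_tcp
	 * traces seen in this log; returns non-zero if the flag is unknown. */
	if (spdk_log_set_flag("nvme") != 0) {
		return 1;
	}
	return 0;
}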
[2024-10-13 11:17:24.544268] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:43.047 [2024-10-13 11:17:24.544278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.047 [2024-10-13 11:17:24.544311] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.047 [2024-10-13 11:17:24.544374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.544382] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.544386] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.047 [2024-10-13 11:17:24.544397] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:43.047 [2024-10-13 11:17:24.544402] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544410] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:43.047 [2024-10-13 11:17:24.544425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.047 [2024-10-13 11:17:24.544475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.047 [2024-10-13 11:17:24.544573] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.047 [2024-10-13 11:17:24.544581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.047 [2024-10-13 11:17:24.544585] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544589] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=4096, cccid=0 00:13:43.047 [2024-10-13 11:17:24.544594] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d49f30) on tqpair(0x1cebd30): expected_datao=0, payload_size=4096 00:13:43.047 [2024-10-13 11:17:24.544604] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544608] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.544624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.544628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.047 [2024-10-13 11:17:24.544643] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:43.047 [2024-10-13 11:17:24.544649] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:43.047 [2024-10-13 11:17:24.544653] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:43.047 [2024-10-13 11:17:24.544658] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:43.047 [2024-10-13 11:17:24.544663] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:43.047 [2024-10-13 11:17:24.544669] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.047 [2024-10-13 11:17:24.544728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.047 [2024-10-13 11:17:24.544788] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.544795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.544799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d49f30) on tqpair=0x1cebd30 00:13:43.047 [2024-10-13 11:17:24.544813] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.047 [2024-10-13 11:17:24.544835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.047 [2024-10-13 11:17:24.544856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.047 [2024-10-13 11:17:24.544876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.047 [2024-10-13 11:17:24.544895] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.544916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544920] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.544924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.544932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.047 [2024-10-13 11:17:24.544952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d49f30, cid 0, qid 0 00:13:43.047 [2024-10-13 11:17:24.544959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a090, cid 1, qid 0 00:13:43.047 [2024-10-13 11:17:24.544964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a1f0, cid 2, qid 0 00:13:43.047 [2024-10-13 11:17:24.544970] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.047 [2024-10-13 11:17:24.544975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.047 [2024-10-13 11:17:24.545075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.545081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.545085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.545090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.047 [2024-10-13 11:17:24.545097] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:43.047 [2024-10-13 11:17:24.545103] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.545111] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.545123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.545130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.545135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.545139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.047 [2024-10-13 11:17:24.545147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.047 [2024-10-13 11:17:24.545166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.047 [2024-10-13 11:17:24.545222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.047 [2024-10-13 11:17:24.545229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.047 [2024-10-13 11:17:24.545233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.047 [2024-10-13 11:17:24.545237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.047 [2024-10-13 11:17:24.545302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:43.047 [2024-10-13 11:17:24.545337] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545348] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545353] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.545366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.545387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.048 [2024-10-13 11:17:24.545457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.048 [2024-10-13 11:17:24.545464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.048 [2024-10-13 11:17:24.545468] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545472] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=4096, cccid=4 00:13:43.048 [2024-10-13 11:17:24.545477] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a4b0) on tqpair(0x1cebd30): expected_datao=0, payload_size=4096 00:13:43.048 [2024-10-13 11:17:24.545485] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545489] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:13:43.048 [2024-10-13 11:17:24.545498] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.545504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.545509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.545530] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:43.048 [2024-10-13 11:17:24.545542] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545552] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.545577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.545596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.048 [2024-10-13 11:17:24.545677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.048 [2024-10-13 11:17:24.545684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.048 [2024-10-13 11:17:24.545688] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545692] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=4096, cccid=4 00:13:43.048 [2024-10-13 11:17:24.545697] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a4b0) on tqpair(0x1cebd30): expected_datao=0, payload_size=4096 00:13:43.048 [2024-10-13 11:17:24.545705] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545724] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.545739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.545743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.545763] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545787] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545791] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.545798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.545817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.048 [2024-10-13 11:17:24.545884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.048 [2024-10-13 11:17:24.545891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.048 [2024-10-13 11:17:24.545894] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545898] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=4096, cccid=4 00:13:43.048 [2024-10-13 11:17:24.545903] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a4b0) on tqpair(0x1cebd30): expected_datao=0, payload_size=4096 00:13:43.048 [2024-10-13 11:17:24.545911] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545915] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.545930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.545933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.545937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.545947] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545955] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545975] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545981] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545986] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:43.048 [2024-10-13 11:17:24.545991] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:43.048 [2024-10-13 11:17:24.545996] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:43.048 [2024-10-13 11:17:24.546013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546021] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.546029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.546036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546044] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.546050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.048 [2024-10-13 11:17:24.546074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.048 [2024-10-13 11:17:24.546082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a610, cid 5, qid 0 00:13:43.048 [2024-10-13 11:17:24.546148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.546155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.546158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.546171] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.546177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.546180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a610) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.546196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.546211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.546228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a610, cid 5, qid 0 00:13:43.048 [2024-10-13 11:17:24.546282] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.546288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.546292] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a610) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.546322] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.546339] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.546369] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a610, cid 5, qid 0 00:13:43.048 [2024-10-13 11:17:24.546420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.546428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.546431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a610) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.546448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cebd30) 00:13:43.048 [2024-10-13 11:17:24.546464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.048 [2024-10-13 11:17:24.546481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a610, cid 5, qid 0 00:13:43.048 [2024-10-13 11:17:24.546537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.048 [2024-10-13 11:17:24.546544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.048 [2024-10-13 11:17:24.546548] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a610) on tqpair=0x1cebd30 00:13:43.048 [2024-10-13 11:17:24.546568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546573] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.048 [2024-10-13 11:17:24.546577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cebd30) 00:13:43.049 [2024-10-13 11:17:24.546584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.049 [2024-10-13 11:17:24.546592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cebd30) 00:13:43.049 [2024-10-13 11:17:24.546619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.049 [2024-10-13 11:17:24.546628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cebd30) 00:13:43.049 [2024-10-13 11:17:24.546643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:43.049 [2024-10-13 11:17:24.546651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546656] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cebd30) 00:13:43.049 [2024-10-13 11:17:24.546666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.049 [2024-10-13 11:17:24.546687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a610, cid 5, qid 0 00:13:43.049 [2024-10-13 11:17:24.546695] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a4b0, cid 4, qid 0 00:13:43.049 [2024-10-13 11:17:24.546700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a770, cid 6, qid 0 00:13:43.049 [2024-10-13 11:17:24.546705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a8d0, cid 7, qid 0 00:13:43.049 [2024-10-13 11:17:24.546840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.049 [2024-10-13 11:17:24.546847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.049 [2024-10-13 11:17:24.546850] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546855] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=8192, cccid=5 00:13:43.049 [2024-10-13 11:17:24.546860] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a610) on tqpair(0x1cebd30): expected_datao=0, payload_size=8192 00:13:43.049 [2024-10-13 11:17:24.546877] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546882] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.049 [2024-10-13 11:17:24.546895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.049 [2024-10-13 11:17:24.546898] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546902] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=512, cccid=4 00:13:43.049 [2024-10-13 11:17:24.546907] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a4b0) on tqpair(0x1cebd30): expected_datao=0, payload_size=512 00:13:43.049 [2024-10-13 11:17:24.546915] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546919] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.049 [2024-10-13 11:17:24.546931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.049 [2024-10-13 11:17:24.546934] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546938] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=512, cccid=6 00:13:43.049 [2024-10-13 11:17:24.546943] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a770) on tqpair(0x1cebd30): expected_datao=0, payload_size=512 00:13:43.049 [2024-10-13 11:17:24.546950] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546954] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:43.049 [2024-10-13 11:17:24.546966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:43.049 [2024-10-13 11:17:24.546970] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.546974] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cebd30): datao=0, datal=4096, cccid=7 00:13:43.049 [2024-10-13 11:17:24.546978] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d4a8d0) on tqpair(0x1cebd30): expected_datao=0, payload_size=4096 00:13:43.049 [2024-10-13 11:17:24.547001] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.547005] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.547013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.049 [2024-10-13 11:17:24.547019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.049 [2024-10-13 11:17:24.547022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.049 ===================================================== 00:13:43.049 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.049 ===================================================== 00:13:43.049 Controller Capabilities/Features 00:13:43.049 ================================ 00:13:43.049 Vendor ID: 8086 00:13:43.049 Subsystem Vendor ID: 8086 00:13:43.049 Serial Number: SPDK00000000000001 00:13:43.049 Model Number: SPDK bdev Controller 00:13:43.049 Firmware Version: 24.01.1 00:13:43.049 Recommended Arb Burst: 6 00:13:43.049 IEEE OUI Identifier: e4 d2 5c 00:13:43.049 Multi-path I/O 00:13:43.049 May have multiple subsystem ports: Yes 00:13:43.049 May have multiple controllers: Yes 00:13:43.049 Associated with SR-IOV VF: No 00:13:43.049 Max Data Transfer Size: 131072 00:13:43.049 Max Number of Namespaces: 32 00:13:43.049 Max Number of I/O Queues: 127 00:13:43.049 NVMe Specification Version (VS): 1.3 00:13:43.049 NVMe Specification Version (Identify): 1.3 00:13:43.049 Maximum Queue Entries: 128 00:13:43.049 Contiguous Queues Required: Yes 00:13:43.049 Arbitration Mechanisms Supported 00:13:43.049 Weighted Round Robin: Not Supported 00:13:43.049 Vendor Specific: Not Supported 00:13:43.049 Reset Timeout: 15000 ms 00:13:43.049 Doorbell Stride: 4 bytes 00:13:43.049 NVM Subsystem Reset: Not Supported 00:13:43.049 Command Sets Supported 00:13:43.049 NVM Command Set: Supported 00:13:43.049 Boot Partition: Not Supported 00:13:43.049 Memory Page Size Minimum: 4096 bytes 00:13:43.049 Memory Page Size Maximum: 4096 bytes 00:13:43.049 Persistent Memory Region: Not Supported 00:13:43.049 Optional Asynchronous Events Supported 00:13:43.049 Namespace Attribute Notices: Supported 00:13:43.049 Firmware Activation Notices: Not Supported 00:13:43.049 ANA Change Notices: Not Supported 00:13:43.049 PLE Aggregate Log Change Notices: Not Supported 00:13:43.049 LBA Status Info Alert Notices: Not Supported 00:13:43.049 EGE Aggregate Log Change Notices: Not Supported 00:13:43.049 Normal NVM Subsystem Shutdown event: Not Supported 00:13:43.049 Zone Descriptor Change Notices: Not Supported 00:13:43.049 Discovery Log Change Notices: Not Supported 00:13:43.049 
Controller Attributes 00:13:43.049 128-bit Host Identifier: Supported 00:13:43.049 Non-Operational Permissive Mode: Not Supported 00:13:43.049 NVM Sets: Not Supported 00:13:43.049 Read Recovery Levels: Not Supported 00:13:43.049 Endurance Groups: Not Supported 00:13:43.049 Predictable Latency Mode: Not Supported 00:13:43.049 Traffic Based Keep ALive: Not Supported 00:13:43.049 Namespace Granularity: Not Supported 00:13:43.049 SQ Associations: Not Supported 00:13:43.049 UUID List: Not Supported 00:13:43.049 Multi-Domain Subsystem: Not Supported 00:13:43.049 Fixed Capacity Management: Not Supported 00:13:43.049 Variable Capacity Management: Not Supported 00:13:43.049 Delete Endurance Group: Not Supported 00:13:43.049 Delete NVM Set: Not Supported 00:13:43.049 Extended LBA Formats Supported: Not Supported 00:13:43.049 Flexible Data Placement Supported: Not Supported 00:13:43.049 00:13:43.049 Controller Memory Buffer Support 00:13:43.049 ================================ 00:13:43.049 Supported: No 00:13:43.049 00:13:43.049 Persistent Memory Region Support 00:13:43.049 ================================ 00:13:43.049 Supported: No 00:13:43.049 00:13:43.049 Admin Command Set Attributes 00:13:43.049 ============================ 00:13:43.049 Security Send/Receive: Not Supported 00:13:43.049 Format NVM: Not Supported 00:13:43.049 Firmware Activate/Download: Not Supported 00:13:43.049 Namespace Management: Not Supported 00:13:43.049 Device Self-Test: Not Supported 00:13:43.049 Directives: Not Supported 00:13:43.049 NVMe-MI: Not Supported 00:13:43.049 Virtualization Management: Not Supported 00:13:43.049 Doorbell Buffer Config: Not Supported 00:13:43.049 Get LBA Status Capability: Not Supported 00:13:43.049 Command & Feature Lockdown Capability: Not Supported 00:13:43.049 Abort Command Limit: 4 00:13:43.049 Async Event Request Limit: 4 00:13:43.049 Number of Firmware Slots: N/A 00:13:43.049 Firmware Slot 1 Read-Only: N/A 00:13:43.049 Firmware Activation Without Reset: [2024-10-13 11:17:24.547026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a610) on tqpair=0x1cebd30 00:13:43.049 [2024-10-13 11:17:24.547044] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.049 [2024-10-13 11:17:24.547051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.049 [2024-10-13 11:17:24.547055] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.547059] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a4b0) on tqpair=0x1cebd30 00:13:43.049 [2024-10-13 11:17:24.547070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.049 [2024-10-13 11:17:24.547076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.049 [2024-10-13 11:17:24.547080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.547084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a770) on tqpair=0x1cebd30 00:13:43.049 [2024-10-13 11:17:24.547092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.049 [2024-10-13 11:17:24.547098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.049 [2024-10-13 11:17:24.547102] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.049 [2024-10-13 11:17:24.547105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a8d0) on tqpair=0x1cebd30 00:13:43.049 N/A 00:13:43.049 Multiple 
Update Detection Support: N/A 00:13:43.049 Firmware Update Granularity: No Information Provided 00:13:43.050 Per-Namespace SMART Log: No 00:13:43.050 Asymmetric Namespace Access Log Page: Not Supported 00:13:43.050 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:43.050 Command Effects Log Page: Supported 00:13:43.050 Get Log Page Extended Data: Supported 00:13:43.050 Telemetry Log Pages: Not Supported 00:13:43.050 Persistent Event Log Pages: Not Supported 00:13:43.050 Supported Log Pages Log Page: May Support 00:13:43.050 Commands Supported & Effects Log Page: Not Supported 00:13:43.050 Feature Identifiers & Effects Log Page:May Support 00:13:43.050 NVMe-MI Commands & Effects Log Page: May Support 00:13:43.050 Data Area 4 for Telemetry Log: Not Supported 00:13:43.050 Error Log Page Entries Supported: 128 00:13:43.050 Keep Alive: Supported 00:13:43.050 Keep Alive Granularity: 10000 ms 00:13:43.050 00:13:43.050 NVM Command Set Attributes 00:13:43.050 ========================== 00:13:43.050 Submission Queue Entry Size 00:13:43.050 Max: 64 00:13:43.050 Min: 64 00:13:43.050 Completion Queue Entry Size 00:13:43.050 Max: 16 00:13:43.050 Min: 16 00:13:43.050 Number of Namespaces: 32 00:13:43.050 Compare Command: Supported 00:13:43.050 Write Uncorrectable Command: Not Supported 00:13:43.050 Dataset Management Command: Supported 00:13:43.050 Write Zeroes Command: Supported 00:13:43.050 Set Features Save Field: Not Supported 00:13:43.050 Reservations: Supported 00:13:43.050 Timestamp: Not Supported 00:13:43.050 Copy: Supported 00:13:43.050 Volatile Write Cache: Present 00:13:43.050 Atomic Write Unit (Normal): 1 00:13:43.050 Atomic Write Unit (PFail): 1 00:13:43.050 Atomic Compare & Write Unit: 1 00:13:43.050 Fused Compare & Write: Supported 00:13:43.050 Scatter-Gather List 00:13:43.050 SGL Command Set: Supported 00:13:43.050 SGL Keyed: Supported 00:13:43.050 SGL Bit Bucket Descriptor: Not Supported 00:13:43.050 SGL Metadata Pointer: Not Supported 00:13:43.050 Oversized SGL: Not Supported 00:13:43.050 SGL Metadata Address: Not Supported 00:13:43.050 SGL Offset: Supported 00:13:43.050 Transport SGL Data Block: Not Supported 00:13:43.050 Replay Protected Memory Block: Not Supported 00:13:43.050 00:13:43.050 Firmware Slot Information 00:13:43.050 ========================= 00:13:43.050 Active slot: 1 00:13:43.050 Slot 1 Firmware Revision: 24.01.1 00:13:43.050 00:13:43.050 00:13:43.050 Commands Supported and Effects 00:13:43.050 ============================== 00:13:43.050 Admin Commands 00:13:43.050 -------------- 00:13:43.050 Get Log Page (02h): Supported 00:13:43.050 Identify (06h): Supported 00:13:43.050 Abort (08h): Supported 00:13:43.050 Set Features (09h): Supported 00:13:43.050 Get Features (0Ah): Supported 00:13:43.050 Asynchronous Event Request (0Ch): Supported 00:13:43.050 Keep Alive (18h): Supported 00:13:43.050 I/O Commands 00:13:43.050 ------------ 00:13:43.050 Flush (00h): Supported LBA-Change 00:13:43.050 Write (01h): Supported LBA-Change 00:13:43.050 Read (02h): Supported 00:13:43.050 Compare (05h): Supported 00:13:43.050 Write Zeroes (08h): Supported LBA-Change 00:13:43.050 Dataset Management (09h): Supported LBA-Change 00:13:43.050 Copy (19h): Supported LBA-Change 00:13:43.050 Unknown (79h): Supported LBA-Change 00:13:43.050 Unknown (7Ah): Supported 00:13:43.050 00:13:43.050 Error Log 00:13:43.050 ========= 00:13:43.050 00:13:43.050 Arbitration 00:13:43.050 =========== 00:13:43.050 Arbitration Burst: 1 00:13:43.050 00:13:43.050 Power Management 00:13:43.050 ================ 00:13:43.050 
Number of Power States: 1 00:13:43.050 Current Power State: Power State #0 00:13:43.050 Power State #0: 00:13:43.050 Max Power: 0.00 W 00:13:43.050 Non-Operational State: Operational 00:13:43.050 Entry Latency: Not Reported 00:13:43.050 Exit Latency: Not Reported 00:13:43.050 Relative Read Throughput: 0 00:13:43.050 Relative Read Latency: 0 00:13:43.050 Relative Write Throughput: 0 00:13:43.050 Relative Write Latency: 0 00:13:43.050 Idle Power: Not Reported 00:13:43.050 Active Power: Not Reported 00:13:43.050 Non-Operational Permissive Mode: Not Supported 00:13:43.050 00:13:43.050 Health Information 00:13:43.050 ================== 00:13:43.050 Critical Warnings: 00:13:43.050 Available Spare Space: OK 00:13:43.050 Temperature: OK 00:13:43.050 Device Reliability: OK 00:13:43.050 Read Only: No 00:13:43.050 Volatile Memory Backup: OK 00:13:43.050 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:43.050 Temperature Threshold: [2024-10-13 11:17:24.547216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.547222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.547226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cebd30) 00:13:43.050 [2024-10-13 11:17:24.547234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.050 [2024-10-13 11:17:24.547256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a8d0, cid 7, qid 0 00:13:43.050 [2024-10-13 11:17:24.547327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.050 [2024-10-13 11:17:24.547334] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.050 [2024-10-13 11:17:24.547338] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.547342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a8d0) on tqpair=0x1cebd30 00:13:43.050 [2024-10-13 11:17:24.551406] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:43.050 [2024-10-13 11:17:24.551452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.050 [2024-10-13 11:17:24.551461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.050 [2024-10-13 11:17:24.551468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.050 [2024-10-13 11:17:24.551474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.050 [2024-10-13 11:17:24.551484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.050 [2024-10-13 11:17:24.551502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.050 [2024-10-13 11:17:24.551529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 
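The "Prepare to destruct SSD" message and the ABORTED - SQ DELETION completions above, together with the RTD3E and shutdown-timeout lines that follow, are the host detaching the controller once the identify report has been dumped. A sketch of the non-blocking variant of that detach step is shown below; it is illustrative only and assumes a ctrlr obtained from spdk_nvme_connect as in the earlier sketch.

/* Sketch: detach without blocking, polling until the controller reports
 * shutdown complete (the "shutdown complete in N milliseconds" trace). */
#include "spdk/nvme.h"

void detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) == 0 && ctx != NULL) {
		spdk_nvme_detach_poll(ctx);   /* loops until the shutdown finishes */
	}
}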
00:13:43.050 [2024-10-13 11:17:24.551593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.050 [2024-10-13 11:17:24.551600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.050 [2024-10-13 11:17:24.551604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.050 [2024-10-13 11:17:24.551617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.050 [2024-10-13 11:17:24.551633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.050 [2024-10-13 11:17:24.551654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.050 [2024-10-13 11:17:24.551759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.050 [2024-10-13 11:17:24.551765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.050 [2024-10-13 11:17:24.551769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.050 [2024-10-13 11:17:24.551779] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:43.050 [2024-10-13 11:17:24.551785] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:43.050 [2024-10-13 11:17:24.551794] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.050 [2024-10-13 11:17:24.551803] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.050 [2024-10-13 11:17:24.551811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.050 [2024-10-13 11:17:24.551827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.551884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.551891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.551894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.551898] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.551910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.551914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.551919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.551926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 
11:17:24.551942] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.551996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552025] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552100] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552110] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552264] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552355] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552397] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552486] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552710] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552835] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.552905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.552913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.552917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.552932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.552941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.552948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.552965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.553019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.553025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.553029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on 
tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.553044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.553060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.553076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.553124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.553131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.553134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.553149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553154] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.051 [2024-10-13 11:17:24.553165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.051 [2024-10-13 11:17:24.553181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.051 [2024-10-13 11:17:24.553234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.051 [2024-10-13 11:17:24.553241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.051 [2024-10-13 11:17:24.553244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.051 [2024-10-13 11:17:24.553259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.051 [2024-10-13 11:17:24.553268] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553406] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553438] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553505] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553614] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553630] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553635] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553638] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 
00:13:43.052 [2024-10-13 11:17:24.553766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553866] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.553893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.553944] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.553955] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.553959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.553975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.553983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.553991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554086] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 
[2024-10-13 11:17:24.554113] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554166] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554173] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554181] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554192] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554320] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554323] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554360] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554529] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554544] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554685] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.052 [2024-10-13 11:17:24.554696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.052 [2024-10-13 11:17:24.554714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.052 [2024-10-13 11:17:24.554767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.052 [2024-10-13 11:17:24.554773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.052 [2024-10-13 11:17:24.554777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.052 [2024-10-13 11:17:24.554781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.052 [2024-10-13 11:17:24.554793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.554797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.554801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.053 [2024-10-13 11:17:24.554809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.053 [2024-10-13 11:17:24.554826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.053 [2024-10-13 11:17:24.554874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.053 [2024-10-13 11:17:24.554881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.053 
[2024-10-13 11:17:24.554885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.554889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.053 [2024-10-13 11:17:24.554900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.554905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.554909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.053 [2024-10-13 11:17:24.554916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.053 [2024-10-13 11:17:24.554933] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.053 [2024-10-13 11:17:24.554980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.053 [2024-10-13 11:17:24.554991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.053 [2024-10-13 11:17:24.554996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.053 [2024-10-13 11:17:24.555012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.053 [2024-10-13 11:17:24.555028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.053 [2024-10-13 11:17:24.555046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.053 [2024-10-13 11:17:24.555099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.053 [2024-10-13 11:17:24.555115] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.053 [2024-10-13 11:17:24.555119] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555124] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30 00:13:43.053 [2024-10-13 11:17:24.555136] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30) 00:13:43.053 [2024-10-13 11:17:24.555153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.053 [2024-10-13 11:17:24.555172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0 00:13:43.053 [2024-10-13 11:17:24.555233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:43.053 [2024-10-13 11:17:24.555239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:43.053 [2024-10-13 11:17:24.555243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:43.053 [2024-10-13 11:17:24.555247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1d4a350) on tqpair=0x1cebd30
00:13:43.053 [2024-10-13 11:17:24.555258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.555262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.555266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30)
00:13:43.053 [2024-10-13 11:17:24.555274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:43.053 [2024-10-13 11:17:24.555290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0
00:13:43.053 [2024-10-13 11:17:24.559394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:13:43.053 [2024-10-13 11:17:24.559413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:13:43.053 [2024-10-13 11:17:24.559418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.559438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30
00:13:43.053 [2024-10-13 11:17:24.559453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.559458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.559462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cebd30)
00:13:43.053 [2024-10-13 11:17:24.559470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:43.053 [2024-10-13 11:17:24.559494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d4a350, cid 3, qid 0
00:13:43.053 [2024-10-13 11:17:24.559547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:13:43.053 [2024-10-13 11:17:24.559553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:13:43.053 [2024-10-13 11:17:24.559557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:13:43.053 [2024-10-13 11:17:24.559561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d4a350) on tqpair=0x1cebd30
00:13:43.053 [2024-10-13 11:17:24.559570] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:13:43.053 0 Kelvin (-273 Celsius)
00:13:43.053 Available Spare: 0%
00:13:43.053 Available Spare Threshold: 0%
00:13:43.053 Life Percentage Used: 0%
00:13:43.053 Data Units Read: 0
00:13:43.053 Data Units Written: 0
00:13:43.053 Host Read Commands: 0
00:13:43.053 Host Write Commands: 0
00:13:43.053 Controller Busy Time: 0 minutes
00:13:43.053 Power Cycles: 0
00:13:43.053 Power On Hours: 0 hours
00:13:43.053 Unsafe Shutdowns: 0
00:13:43.053 Unrecoverable Media Errors: 0
00:13:43.053 Lifetime Error Log Entries: 0
00:13:43.053 Warning Temperature Time: 0 minutes
00:13:43.053 Critical Temperature Time: 0 minutes
00:13:43.053
00:13:43.053 Number of Queues
00:13:43.053 ================
00:13:43.053 Number of I/O Submission Queues: 127
00:13:43.053 Number of I/O Completion Queues: 127
00:13:43.053
00:13:43.053 Active Namespaces
00:13:43.053 =================
00:13:43.053 Namespace ID:1
00:13:43.053 Error Recovery Timeout: Unlimited
00:13:43.053 Command Set Identifier: NVM (00h)
00:13:43.053 Deallocate: Supported
00:13:43.053 Deallocated/Unwritten Error: Not Supported
00:13:43.053 Deallocated Read Value: Unknown
00:13:43.053 Deallocate in Write Zeroes: Not Supported
00:13:43.053 Deallocated Guard Field: 0xFFFF
00:13:43.053 Flush: Supported
00:13:43.053 Reservation: Supported
00:13:43.053 Namespace Sharing Capabilities: Multiple Controllers
00:13:43.053 Size (in LBAs): 131072 (0GiB)
00:13:43.053 Capacity (in LBAs): 131072 (0GiB)
00:13:43.053 Utilization (in LBAs): 131072 (0GiB)
00:13:43.053 NGUID: ABCDEF0123456789ABCDEF0123456789
00:13:43.053 EUI64: ABCDEF0123456789
00:13:43.053 UUID: a5c897bb-338a-4926-8985-6123c012026e
00:13:43.053 Thin Provisioning: Not Supported
00:13:43.053 Per-NS Atomic Units: Yes
00:13:43.053 Atomic Boundary Size (Normal): 0
00:13:43.053 Atomic Boundary Size (PFail): 0
00:13:43.053 Atomic Boundary Offset: 0
00:13:43.053 Maximum Single Source Range Length: 65535
00:13:43.053 Maximum Copy Length: 65535
00:13:43.053 Maximum Source Range Count: 1
00:13:43.053 NGUID/EUI64 Never Reused: No
00:13:43.053 Namespace Write Protected: No
00:13:43.053 Number of LBA Formats: 1
00:13:43.053 Current LBA Format: LBA Format #00
00:13:43.053 LBA Format #00: Data Size: 512 Metadata Size: 0
00:13:43.053
00:13:43.053 11:17:24 -- host/identify.sh@51 -- # sync
00:13:43.053 11:17:24 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:43.053 11:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:43.053 11:17:24 -- common/autotest_common.sh@10 -- # set +x
00:13:43.053 11:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:43.053 11:17:24 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:13:43.053 11:17:24 -- host/identify.sh@56 -- # nvmftestfini
00:13:43.053 11:17:24 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:43.053 11:17:24 -- nvmf/common.sh@116 -- # sync
00:13:43.312 11:17:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:43.312 11:17:24 -- nvmf/common.sh@119 -- # set +e
00:13:43.312 11:17:24 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:43.312 11:17:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:43.312 rmmod nvme_tcp
00:13:43.312 rmmod nvme_fabrics
00:13:43.312 rmmod nvme_keyring
00:13:43.312 11:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:43.312 11:17:24 -- nvmf/common.sh@123 -- # set -e
00:13:43.312 11:17:24 -- nvmf/common.sh@124 -- # return 0
00:13:43.312 11:17:24 -- nvmf/common.sh@477 -- # '[' -n 68105 ']'
00:13:43.312 11:17:24 -- nvmf/common.sh@478 -- # killprocess 68105
00:13:43.312 11:17:24 -- common/autotest_common.sh@926 -- # '[' -z 68105 ']'
00:13:43.312 11:17:24 -- common/autotest_common.sh@930 -- # kill -0 68105
00:13:43.312 11:17:24 -- common/autotest_common.sh@931 -- # uname
00:13:43.312 11:17:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:43.312 11:17:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68105
00:13:43.312 killing process with pid 68105
00:13:43.312 11:17:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:43.312 11:17:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:43.312 11:17:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68105'
00:13:43.312 11:17:24 -- common/autotest_common.sh@945 -- # kill 68105
00:13:43.312 [2024-10-13 11:17:24.737631] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:13:43.312 11:17:24 -- common/autotest_common.sh@950 -- # wait 68105
00:13:43.571 11:17:24 --
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:43.571 11:17:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:43.571 11:17:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:43.571 11:17:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.571 11:17:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:43.571 11:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.571 11:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.571 11:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.571 11:17:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:43.571 00:13:43.571 real 0m2.477s 00:13:43.571 user 0m7.186s 00:13:43.571 sys 0m0.556s 00:13:43.571 11:17:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.571 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 ************************************ 00:13:43.571 END TEST nvmf_identify 00:13:43.571 ************************************ 00:13:43.571 11:17:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:43.571 11:17:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.571 11:17:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.571 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 ************************************ 00:13:43.571 START TEST nvmf_perf 00:13:43.571 ************************************ 00:13:43.571 11:17:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:43.571 * Looking for test storage... 00:13:43.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:43.571 11:17:25 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.571 11:17:25 -- nvmf/common.sh@7 -- # uname -s 00:13:43.571 11:17:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.571 11:17:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.572 11:17:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.572 11:17:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.572 11:17:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.572 11:17:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.572 11:17:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.572 11:17:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.572 11:17:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.572 11:17:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.572 11:17:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:13:43.572 11:17:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:13:43.572 11:17:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.572 11:17:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.572 11:17:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.572 11:17:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.572 11:17:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.572 11:17:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.572 11:17:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.572 11:17:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.572 11:17:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.572 11:17:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.572 11:17:25 -- paths/export.sh@5 -- # export PATH 00:13:43.572 11:17:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.572 11:17:25 -- nvmf/common.sh@46 -- # : 0 00:13:43.572 11:17:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.572 11:17:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.572 11:17:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.572 11:17:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.572 11:17:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.572 11:17:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.572 11:17:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.572 11:17:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.572 11:17:25 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:43.572 11:17:25 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:43.572 11:17:25 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.572 11:17:25 -- host/perf.sh@17 -- # nvmftestinit 00:13:43.572 11:17:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.572 11:17:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.572 11:17:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.572 11:17:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.572 11:17:25 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:13:43.572 11:17:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.572 11:17:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.572 11:17:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.572 11:17:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:43.572 11:17:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:43.572 11:17:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:43.572 11:17:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:43.572 11:17:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:43.572 11:17:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:43.572 11:17:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.572 11:17:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.572 11:17:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.572 11:17:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:43.572 11:17:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.572 11:17:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.572 11:17:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.572 11:17:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.572 11:17:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.572 11:17:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.572 11:17:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.572 11:17:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.572 11:17:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:43.572 11:17:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:43.572 Cannot find device "nvmf_tgt_br" 00:13:43.572 11:17:25 -- nvmf/common.sh@154 -- # true 00:13:43.572 11:17:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.830 Cannot find device "nvmf_tgt_br2" 00:13:43.830 11:17:25 -- nvmf/common.sh@155 -- # true 00:13:43.830 11:17:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:43.830 11:17:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:43.830 Cannot find device "nvmf_tgt_br" 00:13:43.830 11:17:25 -- nvmf/common.sh@157 -- # true 00:13:43.830 11:17:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:43.830 Cannot find device "nvmf_tgt_br2" 00:13:43.830 11:17:25 -- nvmf/common.sh@158 -- # true 00:13:43.830 11:17:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:43.830 11:17:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:43.830 11:17:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.830 11:17:25 -- nvmf/common.sh@161 -- # true 00:13:43.830 11:17:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.830 11:17:25 -- nvmf/common.sh@162 -- # true 00:13:43.830 11:17:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.830 11:17:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.830 11:17:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.830 11:17:25 -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.831 11:17:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.831 11:17:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.831 11:17:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.831 11:17:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.831 11:17:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.831 11:17:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:43.831 11:17:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:43.831 11:17:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:43.831 11:17:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:43.831 11:17:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.831 11:17:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.831 11:17:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.831 11:17:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:43.831 11:17:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:43.831 11:17:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.089 11:17:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.089 11:17:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.089 11:17:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.089 11:17:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.089 11:17:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:44.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:44.089 00:13:44.089 --- 10.0.0.2 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:44.089 11:17:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:44.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:44.089 00:13:44.089 --- 10.0.0.3 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:44.089 11:17:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:44.089 00:13:44.089 --- 10.0.0.1 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:44.089 11:17:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.089 11:17:25 -- nvmf/common.sh@421 -- # return 0 00:13:44.089 11:17:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.089 11:17:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.089 11:17:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.089 11:17:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.089 11:17:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.089 11:17:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.089 11:17:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.089 11:17:25 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:44.089 11:17:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.089 11:17:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.089 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:44.089 11:17:25 -- nvmf/common.sh@469 -- # nvmfpid=68317 00:13:44.089 11:17:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.089 11:17:25 -- nvmf/common.sh@470 -- # waitforlisten 68317 00:13:44.089 11:17:25 -- common/autotest_common.sh@819 -- # '[' -z 68317 ']' 00:13:44.089 11:17:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.089 11:17:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.089 11:17:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.089 11:17:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.089 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:44.089 [2024-10-13 11:17:25.583008] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:44.089 [2024-10-13 11:17:25.583109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.348 [2024-10-13 11:17:25.725289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.348 [2024-10-13 11:17:25.793595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.348 [2024-10-13 11:17:25.793785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.348 [2024-10-13 11:17:25.793801] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.348 [2024-10-13 11:17:25.793812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
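With the target app listening on its RPC socket, the perf fixture below stands up the test subsystem through rpc.py. Condensed from the rpc.py calls traced further down, using the bdev names, NQN, serial number and listener address exactly as this run passes them:

  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420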
00:13:44.348 [2024-10-13 11:17:25.793993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.348 [2024-10-13 11:17:25.794159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.348 [2024-10-13 11:17:25.794658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.348 [2024-10-13 11:17:25.794681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.285 11:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:45.285 11:17:26 -- common/autotest_common.sh@852 -- # return 0 00:13:45.285 11:17:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:45.285 11:17:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:45.285 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.285 11:17:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.285 11:17:26 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:45.285 11:17:26 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:45.544 11:17:27 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:45.544 11:17:27 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:45.803 11:17:27 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:45.803 11:17:27 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.061 11:17:27 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:46.062 11:17:27 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:46.062 11:17:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:46.062 11:17:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:46.062 11:17:27 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.321 [2024-10-13 11:17:27.766091] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.321 11:17:27 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.579 11:17:28 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:46.579 11:17:28 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.851 11:17:28 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:46.851 11:17:28 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:47.120 11:17:28 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.120 [2024-10-13 11:17:28.719387] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.379 11:17:28 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.637 11:17:29 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:47.637 11:17:29 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:47.637 11:17:29 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:47.637 11:17:29 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:48.572 Initializing NVMe 
Controllers 00:13:48.572 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:48.572 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:48.572 Initialization complete. Launching workers. 00:13:48.572 ======================================================== 00:13:48.572 Latency(us) 00:13:48.572 Device Information : IOPS MiB/s Average min max 00:13:48.572 PCIE (0000:00:06.0) NSID 1 from core 0: 23394.73 91.39 1367.94 339.99 7944.92 00:13:48.572 ======================================================== 00:13:48.572 Total : 23394.73 91.39 1367.94 339.99 7944.92 00:13:48.572 00:13:48.572 11:17:30 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:49.949 Initializing NVMe Controllers 00:13:49.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:49.949 Initialization complete. Launching workers. 00:13:49.949 ======================================================== 00:13:49.949 Latency(us) 00:13:49.949 Device Information : IOPS MiB/s Average min max 00:13:49.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3703.94 14.47 269.67 97.98 7231.43 00:13:49.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.00 0.48 8252.82 4973.31 12086.46 00:13:49.949 ======================================================== 00:13:49.949 Total : 3825.94 14.95 524.23 97.98 12086.46 00:13:49.949 00:13:49.949 11:17:31 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:51.327 Initializing NVMe Controllers 00:13:51.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.327 Initialization complete. Launching workers. 00:13:51.327 ======================================================== 00:13:51.327 Latency(us) 00:13:51.327 Device Information : IOPS MiB/s Average min max 00:13:51.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9001.26 35.16 3558.43 502.14 7387.58 00:13:51.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3987.03 15.57 8072.00 6785.31 15331.12 00:13:51.327 ======================================================== 00:13:51.327 Total : 12988.28 50.74 4943.96 502.14 15331.12 00:13:51.327 00:13:51.327 11:17:32 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:51.327 11:17:32 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:53.861 Initializing NVMe Controllers 00:13:53.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.861 Controller IO queue size 128, less than required. 00:13:53.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:53.861 Controller IO queue size 128, less than required. 
00:13:53.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:53.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:53.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:53.861 Initialization complete. Launching workers. 00:13:53.861 ======================================================== 00:13:53.861 Latency(us) 00:13:53.861 Device Information : IOPS MiB/s Average min max 00:13:53.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1988.62 497.16 65864.76 38252.32 120484.04 00:13:53.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 689.62 172.40 196928.15 105946.56 308559.84 00:13:53.861 ======================================================== 00:13:53.861 Total : 2678.24 669.56 99612.06 38252.32 308559.84 00:13:53.861 00:13:53.861 11:17:35 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:54.120 No valid NVMe controllers or AIO or URING devices found 00:13:54.120 Initializing NVMe Controllers 00:13:54.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.120 Controller IO queue size 128, less than required. 00:13:54.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.120 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:54.120 Controller IO queue size 128, less than required. 00:13:54.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.120 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:54.120 WARNING: Some requested NVMe devices were skipped 00:13:54.120 11:17:35 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:56.654 Initializing NVMe Controllers 00:13:56.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.654 Controller IO queue size 128, less than required. 00:13:56.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.654 Controller IO queue size 128, less than required. 00:13:56.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:56.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:56.655 Initialization complete. Launching workers. 
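The two WARNING lines in the 36964-byte run are spdk_nvme_perf dropping namespaces whose sector size does not divide the requested I/O size: 36964 leaves a remainder of 100 against both 512 and 4096, so both namespaces are removed and that run ends with no valid controllers. The arithmetic is easy to confirm (plain shell, nothing SPDK-specific):

echo $(( 36964 % 512 ))    # 100 -> nsid 1 (512 B sectors) is skipped
echo $(( 36964 % 4096 ))   # 100 -> nsid 2 (4096 B sectors) is skipped

The per-namespace poll and completion counters printed next come from the --transport-stat flag on this 256 KiB, 128-deep run; they appear alongside the usual latency table.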
00:13:56.655 00:13:56.655 ==================== 00:13:56.655 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:56.655 TCP transport: 00:13:56.655 polls: 8096 00:13:56.655 idle_polls: 0 00:13:56.655 sock_completions: 8096 00:13:56.655 nvme_completions: 6887 00:13:56.655 submitted_requests: 10541 00:13:56.655 queued_requests: 1 00:13:56.655 00:13:56.655 ==================== 00:13:56.655 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:56.655 TCP transport: 00:13:56.655 polls: 8432 00:13:56.655 idle_polls: 0 00:13:56.655 sock_completions: 8432 00:13:56.655 nvme_completions: 6480 00:13:56.655 submitted_requests: 9924 00:13:56.655 queued_requests: 1 00:13:56.655 ======================================================== 00:13:56.655 Latency(us) 00:13:56.655 Device Information : IOPS MiB/s Average min max 00:13:56.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.94 445.23 73598.73 35646.73 124833.83 00:13:56.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1679.20 419.80 76386.64 37532.19 119839.45 00:13:56.655 ======================================================== 00:13:56.655 Total : 3460.14 865.03 74951.70 35646.73 124833.83 00:13:56.655 00:13:56.655 11:17:38 -- host/perf.sh@66 -- # sync 00:13:56.655 11:17:38 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.913 11:17:38 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:13:56.913 11:17:38 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:13:56.913 11:17:38 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:13:57.171 11:17:38 -- host/perf.sh@72 -- # ls_guid=ad90e690-2bf1-4343-a3f7-05d6a76819c6 00:13:57.171 11:17:38 -- host/perf.sh@73 -- # get_lvs_free_mb ad90e690-2bf1-4343-a3f7-05d6a76819c6 00:13:57.171 11:17:38 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ad90e690-2bf1-4343-a3f7-05d6a76819c6 00:13:57.171 11:17:38 -- common/autotest_common.sh@1344 -- # local lvs_info 00:13:57.171 11:17:38 -- common/autotest_common.sh@1345 -- # local fc 00:13:57.171 11:17:38 -- common/autotest_common.sh@1346 -- # local cs 00:13:57.171 11:17:38 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:57.430 11:17:38 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:13:57.430 { 00:13:57.430 "uuid": "ad90e690-2bf1-4343-a3f7-05d6a76819c6", 00:13:57.430 "name": "lvs_0", 00:13:57.430 "base_bdev": "Nvme0n1", 00:13:57.430 "total_data_clusters": 1278, 00:13:57.430 "free_clusters": 1278, 00:13:57.430 "block_size": 4096, 00:13:57.430 "cluster_size": 4194304 00:13:57.430 } 00:13:57.430 ]' 00:13:57.430 11:17:38 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ad90e690-2bf1-4343-a3f7-05d6a76819c6") .free_clusters' 00:13:57.430 11:17:38 -- common/autotest_common.sh@1348 -- # fc=1278 00:13:57.430 11:17:38 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ad90e690-2bf1-4343-a3f7-05d6a76819c6") .cluster_size' 00:13:57.430 11:17:39 -- common/autotest_common.sh@1349 -- # cs=4194304 00:13:57.430 11:17:39 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:13:57.430 11:17:39 -- common/autotest_common.sh@1353 -- # echo 5112 00:13:57.430 5112 00:13:57.430 11:17:39 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:13:57.430 11:17:39 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
ad90e690-2bf1-4343-a3f7-05d6a76819c6 lbd_0 5112 00:13:57.997 11:17:39 -- host/perf.sh@80 -- # lb_guid=db76cb49-40af-4671-9b3a-f59131487b10 00:13:57.997 11:17:39 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore db76cb49-40af-4671-9b3a-f59131487b10 lvs_n_0 00:13:58.255 11:17:39 -- host/perf.sh@83 -- # ls_nested_guid=ecdad1c3-e91a-4649-b6d6-8167739e9797 00:13:58.255 11:17:39 -- host/perf.sh@84 -- # get_lvs_free_mb ecdad1c3-e91a-4649-b6d6-8167739e9797 00:13:58.255 11:17:39 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ecdad1c3-e91a-4649-b6d6-8167739e9797 00:13:58.255 11:17:39 -- common/autotest_common.sh@1344 -- # local lvs_info 00:13:58.255 11:17:39 -- common/autotest_common.sh@1345 -- # local fc 00:13:58.255 11:17:39 -- common/autotest_common.sh@1346 -- # local cs 00:13:58.255 11:17:39 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:58.514 11:17:39 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:13:58.514 { 00:13:58.514 "uuid": "ad90e690-2bf1-4343-a3f7-05d6a76819c6", 00:13:58.514 "name": "lvs_0", 00:13:58.514 "base_bdev": "Nvme0n1", 00:13:58.514 "total_data_clusters": 1278, 00:13:58.514 "free_clusters": 0, 00:13:58.514 "block_size": 4096, 00:13:58.514 "cluster_size": 4194304 00:13:58.514 }, 00:13:58.514 { 00:13:58.514 "uuid": "ecdad1c3-e91a-4649-b6d6-8167739e9797", 00:13:58.514 "name": "lvs_n_0", 00:13:58.514 "base_bdev": "db76cb49-40af-4671-9b3a-f59131487b10", 00:13:58.514 "total_data_clusters": 1276, 00:13:58.514 "free_clusters": 1276, 00:13:58.514 "block_size": 4096, 00:13:58.514 "cluster_size": 4194304 00:13:58.514 } 00:13:58.514 ]' 00:13:58.514 11:17:39 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ecdad1c3-e91a-4649-b6d6-8167739e9797") .free_clusters' 00:13:58.514 11:17:39 -- common/autotest_common.sh@1348 -- # fc=1276 00:13:58.514 11:17:39 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ecdad1c3-e91a-4649-b6d6-8167739e9797") .cluster_size' 00:13:58.514 5104 00:13:58.514 11:17:40 -- common/autotest_common.sh@1349 -- # cs=4194304 00:13:58.514 11:17:40 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:13:58.514 11:17:40 -- common/autotest_common.sh@1353 -- # echo 5104 00:13:58.514 11:17:40 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:13:58.514 11:17:40 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ecdad1c3-e91a-4649-b6d6-8167739e9797 lbd_nest_0 5104 00:13:58.774 11:17:40 -- host/perf.sh@88 -- # lb_nested_guid=ffc6d956-84ef-4824-aaf6-a4736a3df0ad 00:13:58.774 11:17:40 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:59.032 11:17:40 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:13:59.032 11:17:40 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ffc6d956-84ef-4824-aaf6-a4736a3df0ad 00:13:59.290 11:17:40 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.549 11:17:40 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:13:59.549 11:17:40 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:13:59.549 11:17:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:59.549 11:17:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:59.549 11:17:40 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:59.807 No valid NVMe controllers or AIO or URING devices found 00:13:59.807 Initializing NVMe Controllers 00:13:59.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.807 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:59.807 WARNING: Some requested NVMe devices were skipped 00:13:59.808 11:17:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:59.808 11:17:41 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:12.015 Initializing NVMe Controllers 00:14:12.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:12.015 Initialization complete. Launching workers. 00:14:12.015 ======================================================== 00:14:12.015 Latency(us) 00:14:12.015 Device Information : IOPS MiB/s Average min max 00:14:12.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 987.90 123.49 1011.89 291.03 8030.59 00:14:12.015 ======================================================== 00:14:12.015 Total : 987.90 123.49 1011.89 291.03 8030.59 00:14:12.015 00:14:12.015 11:17:51 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:12.015 11:17:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:12.015 11:17:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:12.015 No valid NVMe controllers or AIO or URING devices found 00:14:12.015 Initializing NVMe Controllers 00:14:12.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.015 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:12.015 WARNING: Some requested NVMe devices were skipped 00:14:12.015 11:17:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:12.015 11:17:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:21.988 Initializing NVMe Controllers 00:14:21.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:21.988 Initialization complete. Launching workers. 
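Each queue-depth pass in this sweep runs twice, first with 512-byte and then with 128 KiB I/O, and every 512-byte pass is skipped with the "invalid ns size ... for I/O size 512" warning because the namespace exported here is the lbd_nest_0 lvol, which uses 4096-byte blocks. The size in that warning also ties back to the lvol provisioning earlier: lvs_0 exposed 1278 free 4 MiB clusters (5112 MiB for lbd_0) and the nested lvs_n_0 carved from it exposed 1276 (5104 MiB for lbd_nest_0), and 1276 clusters of 4 MiB is exactly the byte count reported. A one-line check of that arithmetic:

echo $(( 1276 * 4 * 1024 * 1024 ))   # 5351931904 -- the ns size quoted in the warning

So only the 128 KiB passes in this section produce latency tables.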
00:14:21.988 ======================================================== 00:14:21.988 Latency(us) 00:14:21.988 Device Information : IOPS MiB/s Average min max 00:14:21.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1345.80 168.22 23810.85 6181.39 55193.86 00:14:21.988 ======================================================== 00:14:21.988 Total : 1345.80 168.22 23810.85 6181.39 55193.86 00:14:21.988 00:14:21.988 11:18:02 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:21.988 11:18:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:21.988 11:18:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:21.988 No valid NVMe controllers or AIO or URING devices found 00:14:21.988 Initializing NVMe Controllers 00:14:21.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.988 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:21.988 WARNING: Some requested NVMe devices were skipped 00:14:21.988 11:18:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:21.988 11:18:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:31.965 Initializing NVMe Controllers 00:14:31.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.965 Controller IO queue size 128, less than required. 00:14:31.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:31.965 Initialization complete. Launching workers. 
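This final 128-deep run closes out a sweep that host/perf.sh drives with two nested loops, qd_depth over 1/32/128 and io_size over 512/131072, each pass invoking the same perf binary against the TCP listener. Roughly, as a condensed sketch of the loop traced above:

# Condensed from host/perf.sh@95-99; paths and flags are the ones shown in the trace.
qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done

Its latency table follows below.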
00:14:31.965 ======================================================== 00:14:31.965 Latency(us) 00:14:31.965 Device Information : IOPS MiB/s Average min max 00:14:31.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4039.18 504.90 31719.47 12923.34 60707.29 00:14:31.965 ======================================================== 00:14:31.965 Total : 4039.18 504.90 31719.47 12923.34 60707.29 00:14:31.965 00:14:31.965 11:18:12 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.965 11:18:13 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ffc6d956-84ef-4824-aaf6-a4736a3df0ad 00:14:31.965 11:18:13 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:32.224 11:18:13 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete db76cb49-40af-4671-9b3a-f59131487b10 00:14:32.483 11:18:14 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:32.742 11:18:14 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:32.742 11:18:14 -- host/perf.sh@114 -- # nvmftestfini 00:14:32.742 11:18:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.742 11:18:14 -- nvmf/common.sh@116 -- # sync 00:14:32.742 11:18:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.742 11:18:14 -- nvmf/common.sh@119 -- # set +e 00:14:32.742 11:18:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:32.742 11:18:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.742 rmmod nvme_tcp 00:14:32.742 rmmod nvme_fabrics 00:14:32.742 rmmod nvme_keyring 00:14:32.742 11:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.742 11:18:14 -- nvmf/common.sh@123 -- # set -e 00:14:32.742 11:18:14 -- nvmf/common.sh@124 -- # return 0 00:14:32.742 11:18:14 -- nvmf/common.sh@477 -- # '[' -n 68317 ']' 00:14:32.742 11:18:14 -- nvmf/common.sh@478 -- # killprocess 68317 00:14:32.742 11:18:14 -- common/autotest_common.sh@926 -- # '[' -z 68317 ']' 00:14:32.742 11:18:14 -- common/autotest_common.sh@930 -- # kill -0 68317 00:14:32.742 11:18:14 -- common/autotest_common.sh@931 -- # uname 00:14:32.742 11:18:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.742 11:18:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68317 00:14:32.742 killing process with pid 68317 00:14:32.742 11:18:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:32.742 11:18:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:32.742 11:18:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68317' 00:14:32.742 11:18:14 -- common/autotest_common.sh@945 -- # kill 68317 00:14:32.742 11:18:14 -- common/autotest_common.sh@950 -- # wait 68317 00:14:34.647 11:18:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:34.647 11:18:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:34.647 11:18:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:34.647 11:18:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.647 11:18:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:34.647 11:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.647 11:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.647 11:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.647 11:18:15 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:34.647 00:14:34.647 real 0m50.817s 00:14:34.647 user 3m10.782s 00:14:34.647 sys 0m13.085s 00:14:34.647 ************************************ 00:14:34.647 END TEST nvmf_perf 00:14:34.647 ************************************ 00:14:34.647 11:18:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.647 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.647 11:18:15 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:34.647 11:18:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.647 11:18:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.647 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.647 ************************************ 00:14:34.647 START TEST nvmf_fio_host 00:14:34.647 ************************************ 00:14:34.647 11:18:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:34.647 * Looking for test storage... 00:14:34.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:34.647 11:18:15 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.647 11:18:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.647 11:18:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.647 11:18:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.647 11:18:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.647 11:18:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.647 11:18:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.647 11:18:15 -- paths/export.sh@5 -- # export PATH 00:14:34.647 11:18:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.647 11:18:15 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.647 11:18:15 -- nvmf/common.sh@7 -- # uname -s 00:14:34.647 11:18:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.647 11:18:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.647 11:18:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.647 11:18:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.647 11:18:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.647 11:18:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.647 11:18:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.647 11:18:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.648 11:18:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.648 11:18:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.648 11:18:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:14:34.648 11:18:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:14:34.648 11:18:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.648 11:18:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.648 11:18:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.648 11:18:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.648 11:18:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.648 11:18:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.648 11:18:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.648 11:18:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.648 11:18:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.648 11:18:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.648 11:18:15 -- paths/export.sh@5 -- # export PATH 00:14:34.648 11:18:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.648 11:18:15 -- nvmf/common.sh@46 -- # : 0 00:14:34.648 11:18:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.648 11:18:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.648 11:18:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.648 11:18:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.648 11:18:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.648 11:18:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.648 11:18:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.648 11:18:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.648 11:18:15 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.648 11:18:15 -- host/fio.sh@14 -- # nvmftestinit 00:14:34.648 11:18:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.648 11:18:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.648 11:18:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.648 11:18:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.648 11:18:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.648 11:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.648 11:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.648 11:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.648 11:18:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:34.648 11:18:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:34.648 11:18:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:34.648 11:18:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:34.648 11:18:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:34.648 11:18:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:34.648 11:18:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.648 11:18:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.648 11:18:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:34.648 11:18:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:34.648 11:18:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.648 11:18:15 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.648 11:18:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.648 11:18:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.648 11:18:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.648 11:18:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.648 11:18:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.648 11:18:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.648 11:18:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:34.648 11:18:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:34.648 Cannot find device "nvmf_tgt_br" 00:14:34.648 11:18:16 -- nvmf/common.sh@154 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.648 Cannot find device "nvmf_tgt_br2" 00:14:34.648 11:18:16 -- nvmf/common.sh@155 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:34.648 11:18:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:34.648 Cannot find device "nvmf_tgt_br" 00:14:34.648 11:18:16 -- nvmf/common.sh@157 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:34.648 Cannot find device "nvmf_tgt_br2" 00:14:34.648 11:18:16 -- nvmf/common.sh@158 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:34.648 11:18:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:34.648 11:18:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.648 11:18:16 -- nvmf/common.sh@161 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.648 11:18:16 -- nvmf/common.sh@162 -- # true 00:14:34.648 11:18:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.648 11:18:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.648 11:18:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.648 11:18:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:34.648 11:18:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:34.648 11:18:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:34.648 11:18:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:34.648 11:18:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:34.648 11:18:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:34.648 11:18:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:34.648 11:18:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:34.648 11:18:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:34.648 11:18:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:34.648 11:18:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:34.648 11:18:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
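The "Cannot find device" and "Cannot open network namespace" lines above are expected: fio.sh starts with the same nvmf_veth_init teardown that the perf test already performed, so the namespace and the target-side veth pairs it tries to remove are mostly gone and each failed delete is simply ignored before the identical topology is rebuilt. In effect the cleanup behaves like a guarded teardown along these lines (the explicit '|| true' handling is an illustration, not the script's wording):

# Illustrative only -- the trace shows the raw deletes failing harmlessly instead.
ip link set nvmf_init_br nomaster 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true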
00:14:34.648 11:18:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:34.648 11:18:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:34.648 11:18:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:34.648 11:18:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:34.907 11:18:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:34.907 11:18:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:34.907 11:18:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:34.907 11:18:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:34.907 11:18:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:34.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:34.907 00:14:34.907 --- 10.0.0.2 ping statistics --- 00:14:34.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.907 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:34.907 11:18:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:34.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:34.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:14:34.907 00:14:34.907 --- 10.0.0.3 ping statistics --- 00:14:34.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.907 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:34.907 11:18:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:34.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:34.907 00:14:34.907 --- 10.0.0.1 ping statistics --- 00:14:34.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.907 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:34.907 11:18:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.907 11:18:16 -- nvmf/common.sh@421 -- # return 0 00:14:34.907 11:18:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:34.907 11:18:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.907 11:18:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:34.907 11:18:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:34.907 11:18:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.907 11:18:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:34.907 11:18:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:34.907 11:18:16 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:34.907 11:18:16 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:34.907 11:18:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:34.907 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:14:34.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:34.907 11:18:16 -- host/fio.sh@24 -- # nvmfpid=69142 00:14:34.907 11:18:16 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.907 11:18:16 -- host/fio.sh@28 -- # waitforlisten 69142 00:14:34.907 11:18:16 -- common/autotest_common.sh@819 -- # '[' -z 69142 ']' 00:14:34.907 11:18:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.907 11:18:16 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:34.907 11:18:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:34.907 11:18:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.907 11:18:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:34.907 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:14:34.907 [2024-10-13 11:18:16.387993] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:34.907 [2024-10-13 11:18:16.388093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.165 [2024-10-13 11:18:16.524439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.165 [2024-10-13 11:18:16.594627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.165 [2024-10-13 11:18:16.595027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.165 [2024-10-13 11:18:16.595181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.165 [2024-10-13 11:18:16.595422] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
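As before, the target is launched with -e 0xFFFF, which enables the full tracepoint group mask; that is why the startup notices advertise a trace shared-memory file. The notices themselves give the recipe, repeated here only as a sketch of how one might inspect the trace while this target runs:

# Quoted from the notice above: snapshot the nvmf trace for instance id 0,
spdk_trace -s nvmf -i 0
# or copy /dev/shm/nvmf_trace.0 aside for offline analysis, as the second notice suggests.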
00:14:35.165 [2024-10-13 11:18:16.595695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.166 [2024-10-13 11:18:16.595779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.166 [2024-10-13 11:18:16.595847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.166 [2024-10-13 11:18:16.595844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.101 11:18:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.101 11:18:17 -- common/autotest_common.sh@852 -- # return 0 00:14:36.101 11:18:17 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:36.101 [2024-10-13 11:18:17.652564] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.101 11:18:17 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:36.101 11:18:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.101 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:14:36.359 11:18:17 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.618 Malloc1 00:14:36.618 11:18:18 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:36.877 11:18:18 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:37.135 11:18:18 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.394 [2024-10-13 11:18:18.752780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.394 11:18:18 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.653 11:18:18 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:37.653 11:18:18 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:37.653 11:18:18 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:37.653 11:18:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:37.653 11:18:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:37.653 11:18:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:37.653 11:18:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:37.653 11:18:18 -- common/autotest_common.sh@1320 -- # shift 00:14:37.653 11:18:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:37.653 11:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:37.653 11:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:37.653 11:18:19 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:37.653 11:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:37.653 11:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:37.653 11:18:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:37.653 11:18:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:37.653 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:37.653 fio-3.35 00:14:37.653 Starting 1 thread 00:14:40.202 00:14:40.202 test: (groupid=0, jobs=1): err= 0: pid=69224: Sun Oct 13 11:18:21 2024 00:14:40.202 read: IOPS=9259, BW=36.2MiB/s (37.9MB/s)(72.6MiB/2006msec) 00:14:40.202 slat (nsec): min=1921, max=376547, avg=2585.46, stdev=3661.71 00:14:40.202 clat (usec): min=2809, max=11817, avg=7188.29, stdev=550.47 00:14:40.202 lat (usec): min=2845, max=11819, avg=7190.88, stdev=550.27 00:14:40.202 clat percentiles (usec): 00:14:40.202 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:14:40.202 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:14:40.202 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:14:40.202 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[11469], 00:14:40.202 | 99.99th=[11731] 00:14:40.202 bw ( KiB/s): min=36400, max=37400, per=99.94%, avg=37016.00, stdev=437.35, samples=4 00:14:40.202 iops : min= 9100, max= 9350, avg=9254.00, stdev=109.34, samples=4 00:14:40.202 write: IOPS=9263, BW=36.2MiB/s (37.9MB/s)(72.6MiB/2006msec); 0 zone resets 00:14:40.202 slat (nsec): min=1974, max=267974, avg=2690.23, stdev=2609.95 00:14:40.202 clat (usec): min=2631, max=11791, avg=6570.36, stdev=497.17 00:14:40.202 lat (usec): min=2645, max=11793, avg=6573.05, stdev=497.04 00:14:40.202 clat percentiles (usec): 00:14:40.202 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6194], 00:14:40.202 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:14:40.202 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7373], 00:14:40.202 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 9896], 99.95th=[11076], 00:14:40.202 | 99.99th=[11731] 00:14:40.202 bw ( KiB/s): min=36568, max=37296, per=99.97%, avg=37042.00, stdev=332.58, samples=4 00:14:40.202 iops : min= 9142, max= 9324, avg=9260.50, stdev=83.14, samples=4 00:14:40.202 lat (msec) : 4=0.07%, 10=99.82%, 20=0.10% 00:14:40.202 cpu : usr=69.98%, sys=21.75%, ctx=8, majf=0, minf=5 00:14:40.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:40.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.202 issued rwts: total=18574,18582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.202 00:14:40.202 Run status group 0 (all jobs): 00:14:40.202 READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.6MiB (76.1MB), 
run=2006-2006msec 00:14:40.202 WRITE: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.6MiB (76.1MB), run=2006-2006msec 00:14:40.202 11:18:21 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.202 11:18:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.202 11:18:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:40.202 11:18:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.202 11:18:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:40.202 11:18:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.202 11:18:21 -- common/autotest_common.sh@1320 -- # shift 00:14:40.202 11:18:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:40.202 11:18:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:40.202 11:18:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:40.202 11:18:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:40.202 11:18:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:40.202 11:18:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:40.202 11:18:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:40.202 11:18:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.202 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:40.202 fio-3.35 00:14:40.202 Starting 1 thread 00:14:42.737 00:14:42.737 test: (groupid=0, jobs=1): err= 0: pid=69268: Sun Oct 13 11:18:23 2024 00:14:42.737 read: IOPS=8815, BW=138MiB/s (144MB/s)(276MiB/2006msec) 00:14:42.737 slat (usec): min=2, max=137, avg= 3.81, stdev= 2.57 00:14:42.737 clat (usec): min=1686, max=18736, avg=8082.54, stdev=2625.10 00:14:42.737 lat (usec): min=1690, max=18739, avg=8086.35, stdev=2625.24 00:14:42.737 clat percentiles (usec): 00:14:42.737 | 1.00th=[ 3851], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5735], 00:14:42.737 | 30.00th=[ 6390], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8455], 00:14:42.737 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[11469], 95.00th=[13173], 00:14:42.737 | 99.00th=[16057], 99.50th=[16712], 99.90th=[18220], 99.95th=[18744], 00:14:42.737 | 99.99th=[18744] 00:14:42.737 bw ( KiB/s): min=64384, max=74880, per=50.43%, avg=71136.00, stdev=4623.82, samples=4 00:14:42.737 iops : 
min= 4024, max= 4680, avg=4446.00, stdev=288.99, samples=4 00:14:42.737 write: IOPS=5199, BW=81.2MiB/s (85.2MB/s)(145MiB/1784msec); 0 zone resets 00:14:42.737 slat (usec): min=32, max=348, avg=38.00, stdev= 9.61 00:14:42.737 clat (usec): min=2928, max=19393, avg=11373.84, stdev=1877.02 00:14:42.737 lat (usec): min=2962, max=19426, avg=11411.83, stdev=1876.97 00:14:42.737 clat percentiles (usec): 00:14:42.737 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:14:42.737 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:14:42.737 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:14:42.737 | 99.00th=[16188], 99.50th=[16581], 99.90th=[18220], 99.95th=[18482], 00:14:42.737 | 99.99th=[19268] 00:14:42.737 bw ( KiB/s): min=67264, max=78272, per=89.20%, avg=74208.00, stdev=4883.19, samples=4 00:14:42.737 iops : min= 4204, max= 4892, avg=4638.00, stdev=305.20, samples=4 00:14:42.737 lat (msec) : 2=0.03%, 4=0.96%, 10=57.81%, 20=41.20% 00:14:42.737 cpu : usr=80.90%, sys=13.87%, ctx=3, majf=0, minf=6 00:14:42.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:42.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.737 issued rwts: total=17684,9276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.737 00:14:42.737 Run status group 0 (all jobs): 00:14:42.737 READ: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=276MiB (290MB), run=2006-2006msec 00:14:42.737 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=145MiB (152MB), run=1784-1784msec 00:14:42.737 11:18:23 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.737 11:18:24 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:42.737 11:18:24 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:42.737 11:18:24 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:42.737 11:18:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:42.737 11:18:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:42.737 11:18:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:42.737 11:18:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:42.737 11:18:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:42.737 11:18:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:14:42.737 11:18:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:42.737 11:18:24 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:43.305 Nvme0n1 00:14:43.305 11:18:24 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:43.563 11:18:24 -- host/fio.sh@53 -- # ls_guid=b1672a21-ee31-4162-8ba2-68b8b2a3e4cd 00:14:43.563 11:18:24 -- host/fio.sh@54 -- # get_lvs_free_mb b1672a21-ee31-4162-8ba2-68b8b2a3e4cd 00:14:43.563 11:18:24 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b1672a21-ee31-4162-8ba2-68b8b2a3e4cd 00:14:43.563 11:18:24 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:43.563 11:18:24 -- common/autotest_common.sh@1345 -- # local fc 00:14:43.563 11:18:24 -- 
common/autotest_common.sh@1346 -- # local cs 00:14:43.563 11:18:24 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:43.822 11:18:25 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:43.822 { 00:14:43.822 "uuid": "b1672a21-ee31-4162-8ba2-68b8b2a3e4cd", 00:14:43.822 "name": "lvs_0", 00:14:43.822 "base_bdev": "Nvme0n1", 00:14:43.822 "total_data_clusters": 4, 00:14:43.822 "free_clusters": 4, 00:14:43.822 "block_size": 4096, 00:14:43.822 "cluster_size": 1073741824 00:14:43.822 } 00:14:43.822 ]' 00:14:43.822 11:18:25 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b1672a21-ee31-4162-8ba2-68b8b2a3e4cd") .free_clusters' 00:14:43.822 11:18:25 -- common/autotest_common.sh@1348 -- # fc=4 00:14:43.822 11:18:25 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b1672a21-ee31-4162-8ba2-68b8b2a3e4cd") .cluster_size' 00:14:43.822 4096 00:14:43.822 11:18:25 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:14:43.822 11:18:25 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:14:43.822 11:18:25 -- common/autotest_common.sh@1353 -- # echo 4096 00:14:43.822 11:18:25 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:44.081 8c57be97-4562-4c49-8ecd-d68260f6caec 00:14:44.081 11:18:25 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:44.340 11:18:25 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:44.600 11:18:26 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:44.859 11:18:26 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:44.859 11:18:26 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:44.859 11:18:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:44.859 11:18:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.859 11:18:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:44.859 11:18:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.859 11:18:26 -- common/autotest_common.sh@1320 -- # shift 00:14:44.859 11:18:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:44.859 11:18:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:44.859 11:18:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:44.859 11:18:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.859 11:18:26 -- 
common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:44.859 11:18:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:44.859 11:18:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:44.859 11:18:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:44.859 11:18:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:44.859 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:44.859 fio-3.35 00:14:44.859 Starting 1 thread 00:14:47.412 00:14:47.412 test: (groupid=0, jobs=1): err= 0: pid=69378: Sun Oct 13 11:18:28 2024 00:14:47.412 read: IOPS=6514, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2008msec) 00:14:47.412 slat (nsec): min=1932, max=261621, avg=2704.25, stdev=3317.39 00:14:47.412 clat (usec): min=2939, max=17783, avg=10256.77, stdev=856.99 00:14:47.412 lat (usec): min=2947, max=17786, avg=10259.47, stdev=856.74 00:14:47.412 clat percentiles (usec): 00:14:47.412 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:14:47.412 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:14:47.412 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:14:47.412 | 99.00th=[12125], 99.50th=[12518], 99.90th=[16581], 99.95th=[16909], 00:14:47.412 | 99.99th=[17695] 00:14:47.412 bw ( KiB/s): min=24808, max=26928, per=99.89%, avg=26032.00, stdev=898.07, samples=4 00:14:47.412 iops : min= 6202, max= 6732, avg=6508.00, stdev=224.52, samples=4 00:14:47.412 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(51.2MiB/2008msec); 0 zone resets 00:14:47.412 slat (nsec): min=1997, max=195950, avg=2847.43, stdev=2514.70 00:14:47.413 clat (usec): min=1973, max=16757, avg=9305.33, stdev=796.77 00:14:47.413 lat (usec): min=1985, max=16760, avg=9308.18, stdev=796.67 00:14:47.413 clat percentiles (usec): 00:14:47.413 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8356], 20.00th=[ 8717], 00:14:47.413 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:14:47.413 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:14:47.413 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14615], 99.95th=[15795], 00:14:47.413 | 99.99th=[16712] 00:14:47.413 bw ( KiB/s): min=25920, max=26304, per=99.93%, avg=26082.00, stdev=160.58, samples=4 00:14:47.413 iops : min= 6480, max= 6576, avg=6520.50, stdev=40.15, samples=4 00:14:47.413 lat (msec) : 2=0.01%, 4=0.06%, 10=60.26%, 20=39.68% 00:14:47.413 cpu : usr=73.54%, sys=20.43%, ctx=17, majf=0, minf=14 00:14:47.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:47.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:47.413 issued rwts: total=13082,13102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:47.413 00:14:47.413 Run status group 0 (all jobs): 00:14:47.413 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2008-2008msec 00:14:47.413 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.2MiB (53.7MB), run=2008-2008msec 00:14:47.413 11:18:28 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:47.413 11:18:28 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:47.672 11:18:29 -- host/fio.sh@64 -- # ls_nested_guid=ad9f5999-279e-4dc1-a64e-26c1760920fc 00:14:47.672 11:18:29 -- host/fio.sh@65 -- # get_lvs_free_mb ad9f5999-279e-4dc1-a64e-26c1760920fc 00:14:47.672 11:18:29 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ad9f5999-279e-4dc1-a64e-26c1760920fc 00:14:47.672 11:18:29 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:47.672 11:18:29 -- common/autotest_common.sh@1345 -- # local fc 00:14:47.672 11:18:29 -- common/autotest_common.sh@1346 -- # local cs 00:14:47.672 11:18:29 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:48.240 11:18:29 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:48.240 { 00:14:48.240 "uuid": "b1672a21-ee31-4162-8ba2-68b8b2a3e4cd", 00:14:48.240 "name": "lvs_0", 00:14:48.240 "base_bdev": "Nvme0n1", 00:14:48.240 "total_data_clusters": 4, 00:14:48.240 "free_clusters": 0, 00:14:48.240 "block_size": 4096, 00:14:48.240 "cluster_size": 1073741824 00:14:48.240 }, 00:14:48.240 { 00:14:48.240 "uuid": "ad9f5999-279e-4dc1-a64e-26c1760920fc", 00:14:48.240 "name": "lvs_n_0", 00:14:48.240 "base_bdev": "8c57be97-4562-4c49-8ecd-d68260f6caec", 00:14:48.240 "total_data_clusters": 1022, 00:14:48.240 "free_clusters": 1022, 00:14:48.240 "block_size": 4096, 00:14:48.240 "cluster_size": 4194304 00:14:48.240 } 00:14:48.240 ]' 00:14:48.240 11:18:29 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ad9f5999-279e-4dc1-a64e-26c1760920fc") .free_clusters' 00:14:48.240 11:18:29 -- common/autotest_common.sh@1348 -- # fc=1022 00:14:48.240 11:18:29 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ad9f5999-279e-4dc1-a64e-26c1760920fc") .cluster_size' 00:14:48.240 4088 00:14:48.240 11:18:29 -- common/autotest_common.sh@1349 -- # cs=4194304 00:14:48.240 11:18:29 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:14:48.240 11:18:29 -- common/autotest_common.sh@1353 -- # echo 4088 00:14:48.240 11:18:29 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:48.240 07514f81-df7c-4697-9f00-eefe50dd916c 00:14:48.499 11:18:29 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:48.499 11:18:30 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:48.758 11:18:30 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:49.017 11:18:30 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.017 11:18:30 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.017 11:18:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:49.017 11:18:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.017 
11:18:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:49.017 11:18:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.017 11:18:30 -- common/autotest_common.sh@1320 -- # shift 00:14:49.017 11:18:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:49.017 11:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:49.017 11:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:49.017 11:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:49.017 11:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:49.017 11:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:49.017 11:18:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:49.017 11:18:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.276 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:49.276 fio-3.35 00:14:49.276 Starting 1 thread 00:14:51.809 00:14:51.809 test: (groupid=0, jobs=1): err= 0: pid=69456: Sun Oct 13 11:18:33 2024 00:14:51.809 read: IOPS=5841, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:14:51.809 slat (nsec): min=1932, max=288790, avg=2486.63, stdev=3521.32 00:14:51.809 clat (usec): min=3207, max=19846, avg=11469.62, stdev=968.33 00:14:51.809 lat (usec): min=3216, max=19848, avg=11472.11, stdev=968.04 00:14:51.809 clat percentiles (usec): 00:14:51.809 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:14:51.809 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:14:51.809 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[12911], 00:14:51.809 | 99.00th=[13566], 99.50th=[14091], 99.90th=[18482], 99.95th=[19530], 00:14:51.809 | 99.99th=[19792] 00:14:51.809 bw ( KiB/s): min=22464, max=23776, per=99.89%, avg=23342.00, stdev=607.31, samples=4 00:14:51.809 iops : min= 5616, max= 5944, avg=5835.50, stdev=151.83, samples=4 00:14:51.809 write: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2009msec); 0 zone resets 00:14:51.809 slat (nsec): min=1995, max=244237, avg=2636.30, stdev=2869.74 00:14:51.809 clat (usec): min=2194, max=18534, avg=10366.51, stdev=907.89 00:14:51.809 lat (usec): min=2206, max=18536, avg=10369.14, stdev=907.74 00:14:51.809 clat percentiles (usec): 00:14:51.809 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:14:51.809 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:14:51.809 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:14:51.809 | 99.00th=[12387], 99.50th=[12649], 99.90th=[17171], 99.95th=[18220], 00:14:51.809 | 99.99th=[18482] 
00:14:51.809 bw ( KiB/s): min=23232, max=23360, per=99.92%, avg=23298.00, stdev=52.41, samples=4 00:14:51.809 iops : min= 5808, max= 5840, avg=5824.50, stdev=13.10, samples=4 00:14:51.809 lat (msec) : 4=0.06%, 10=18.64%, 20=81.30% 00:14:51.809 cpu : usr=75.70%, sys=19.17%, ctx=20, majf=0, minf=14 00:14:51.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:51.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.809 issued rwts: total=11736,11711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.809 00:14:51.809 Run status group 0 (all jobs): 00:14:51.809 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.1MB), run=2009-2009msec 00:14:51.809 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2009-2009msec 00:14:51.809 11:18:33 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:51.809 11:18:33 -- host/fio.sh@74 -- # sync 00:14:51.809 11:18:33 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:52.068 11:18:33 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:52.327 11:18:33 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:52.585 11:18:34 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:52.843 11:18:34 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:53.779 11:18:35 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:53.779 11:18:35 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:53.779 11:18:35 -- host/fio.sh@86 -- # nvmftestfini 00:14:53.779 11:18:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.779 11:18:35 -- nvmf/common.sh@116 -- # sync 00:14:53.779 11:18:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.779 11:18:35 -- nvmf/common.sh@119 -- # set +e 00:14:53.779 11:18:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.779 11:18:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.779 rmmod nvme_tcp 00:14:53.779 rmmod nvme_fabrics 00:14:53.779 rmmod nvme_keyring 00:14:53.779 11:18:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.779 11:18:35 -- nvmf/common.sh@123 -- # set -e 00:14:53.779 11:18:35 -- nvmf/common.sh@124 -- # return 0 00:14:53.779 11:18:35 -- nvmf/common.sh@477 -- # '[' -n 69142 ']' 00:14:53.779 11:18:35 -- nvmf/common.sh@478 -- # killprocess 69142 00:14:53.779 11:18:35 -- common/autotest_common.sh@926 -- # '[' -z 69142 ']' 00:14:53.779 11:18:35 -- common/autotest_common.sh@930 -- # kill -0 69142 00:14:53.779 11:18:35 -- common/autotest_common.sh@931 -- # uname 00:14:53.779 11:18:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.779 11:18:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69142 00:14:53.779 killing process with pid 69142 00:14:53.779 11:18:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.779 11:18:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.779 11:18:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69142' 00:14:53.779 11:18:35 -- 
common/autotest_common.sh@945 -- # kill 69142 00:14:53.779 11:18:35 -- common/autotest_common.sh@950 -- # wait 69142 00:14:54.038 11:18:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.038 11:18:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:54.038 11:18:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:54.038 11:18:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.038 11:18:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:54.038 11:18:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.038 11:18:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.038 11:18:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.038 11:18:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:54.038 00:14:54.038 real 0m19.615s 00:14:54.038 user 1m26.790s 00:14:54.038 sys 0m4.289s 00:14:54.038 11:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.038 ************************************ 00:14:54.038 END TEST nvmf_fio_host 00:14:54.038 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.038 ************************************ 00:14:54.038 11:18:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:54.038 11:18:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.038 11:18:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.038 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.038 ************************************ 00:14:54.038 START TEST nvmf_failover 00:14:54.038 ************************************ 00:14:54.038 11:18:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:54.038 * Looking for test storage... 
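The nvmf_fio_host run that finishes above drives fio through SPDK's NVMe fio plugin rather than the kernel initiator: fio_nvme LD_PRELOADs the spdk_nvme ioengine built under build/fio/ and passes the TCP target address through --filename, as the xtrace lines show. A minimal sketch of that invocation, assuming the SPDK tree and fio build locations used in this job:

    SPDK=/home/vagrant/spdk_repo/spdk
    # spdk_nvme is the external fio ioengine; the target address goes in --filename
    LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
        $SPDK/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096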
00:14:54.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:54.038 11:18:35 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.038 11:18:35 -- nvmf/common.sh@7 -- # uname -s 00:14:54.038 11:18:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.038 11:18:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.038 11:18:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.038 11:18:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.038 11:18:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.038 11:18:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.038 11:18:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.038 11:18:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.038 11:18:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.038 11:18:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.038 11:18:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:14:54.038 11:18:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:14:54.038 11:18:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.038 11:18:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.299 11:18:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.299 11:18:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.299 11:18:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.299 11:18:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.299 11:18:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.299 11:18:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.299 11:18:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.299 11:18:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.299 11:18:35 -- paths/export.sh@5 
-- # export PATH 00:14:54.299 11:18:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.299 11:18:35 -- nvmf/common.sh@46 -- # : 0 00:14:54.299 11:18:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.299 11:18:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.299 11:18:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.299 11:18:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.299 11:18:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.299 11:18:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.299 11:18:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.299 11:18:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.299 11:18:35 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.299 11:18:35 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.299 11:18:35 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.299 11:18:35 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.299 11:18:35 -- host/failover.sh@18 -- # nvmftestinit 00:14:54.299 11:18:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:54.299 11:18:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.299 11:18:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.299 11:18:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.299 11:18:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.299 11:18:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.299 11:18:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.299 11:18:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.299 11:18:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:54.299 11:18:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:54.299 11:18:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:54.299 11:18:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:54.299 11:18:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:54.299 11:18:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:54.299 11:18:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.299 11:18:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.299 11:18:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:54.299 11:18:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:54.299 11:18:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.299 11:18:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.299 11:18:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.299 11:18:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.299 11:18:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.299 11:18:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.299 11:18:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:14:54.299 11:18:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.299 11:18:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:54.299 11:18:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:54.299 Cannot find device "nvmf_tgt_br" 00:14:54.299 11:18:35 -- nvmf/common.sh@154 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.299 Cannot find device "nvmf_tgt_br2" 00:14:54.299 11:18:35 -- nvmf/common.sh@155 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:54.299 11:18:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:54.299 Cannot find device "nvmf_tgt_br" 00:14:54.299 11:18:35 -- nvmf/common.sh@157 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:54.299 Cannot find device "nvmf_tgt_br2" 00:14:54.299 11:18:35 -- nvmf/common.sh@158 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:54.299 11:18:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:54.299 11:18:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.299 11:18:35 -- nvmf/common.sh@161 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.299 11:18:35 -- nvmf/common.sh@162 -- # true 00:14:54.299 11:18:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.299 11:18:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.299 11:18:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.299 11:18:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.299 11:18:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.299 11:18:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.299 11:18:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.299 11:18:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.299 11:18:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.299 11:18:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:54.299 11:18:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:54.299 11:18:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:54.299 11:18:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:54.299 11:18:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.299 11:18:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.299 11:18:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.299 11:18:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:54.299 11:18:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:54.299 11:18:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.563 11:18:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.563 11:18:35 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:14:54.563 11:18:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.563 11:18:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.563 11:18:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:54.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:54.563 00:14:54.563 --- 10.0.0.2 ping statistics --- 00:14:54.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.563 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:54.563 11:18:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:54.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:54.563 00:14:54.563 --- 10.0.0.3 ping statistics --- 00:14:54.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.563 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:54.563 11:18:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:54.563 00:14:54.563 --- 10.0.0.1 ping statistics --- 00:14:54.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.563 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:54.563 11:18:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.563 11:18:35 -- nvmf/common.sh@421 -- # return 0 00:14:54.563 11:18:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:54.563 11:18:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.563 11:18:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:54.563 11:18:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:54.563 11:18:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.563 11:18:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:54.563 11:18:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:54.563 11:18:35 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:54.563 11:18:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.563 11:18:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:54.563 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.563 11:18:35 -- nvmf/common.sh@469 -- # nvmfpid=69697 00:14:54.563 11:18:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:54.563 11:18:35 -- nvmf/common.sh@470 -- # waitforlisten 69697 00:14:54.563 11:18:35 -- common/autotest_common.sh@819 -- # '[' -z 69697 ']' 00:14:54.563 11:18:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.563 11:18:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.563 11:18:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.563 11:18:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.563 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.563 [2024-10-13 11:18:36.029330] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:54.563 [2024-10-13 11:18:36.029471] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.822 [2024-10-13 11:18:36.161566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:54.822 [2024-10-13 11:18:36.217906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.822 [2024-10-13 11:18:36.218313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.822 [2024-10-13 11:18:36.218424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.822 [2024-10-13 11:18:36.218586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.822 [2024-10-13 11:18:36.218769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.822 [2024-10-13 11:18:36.219383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.822 [2024-10-13 11:18:36.219385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.757 11:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.757 11:18:36 -- common/autotest_common.sh@852 -- # return 0 00:14:55.757 11:18:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:55.757 11:18:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:55.757 11:18:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.757 11:18:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.757 11:18:37 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:55.757 [2024-10-13 11:18:37.256189] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.757 11:18:37 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:56.016 Malloc0 00:14:56.016 11:18:37 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.275 11:18:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.536 11:18:38 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.799 [2024-10-13 11:18:38.297610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.799 11:18:38 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:57.057 [2024-10-13 11:18:38.529825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:57.057 11:18:38 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:57.316 [2024-10-13 11:18:38.794066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:57.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:57.316 11:18:38 -- host/failover.sh@31 -- # bdevperf_pid=69755 00:14:57.316 11:18:38 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:57.316 11:18:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.316 11:18:38 -- host/failover.sh@34 -- # waitforlisten 69755 /var/tmp/bdevperf.sock 00:14:57.316 11:18:38 -- common/autotest_common.sh@819 -- # '[' -z 69755 ']' 00:14:57.316 11:18:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.317 11:18:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.317 11:18:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.317 11:18:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.317 11:18:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.253 11:18:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.253 11:18:39 -- common/autotest_common.sh@852 -- # return 0 00:14:58.253 11:18:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.512 NVMe0n1 00:14:58.512 11:18:40 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:59.079 00:14:59.079 11:18:40 -- host/failover.sh@39 -- # run_test_pid=69777 00:14:59.079 11:18:40 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:59.079 11:18:40 -- host/failover.sh@41 -- # sleep 1 00:15:00.015 11:18:41 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.273 [2024-10-13 11:18:41.648101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.273 [2024-10-13 11:18:41.648160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.273 [2024-10-13 11:18:41.648188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 [2024-10-13 11:18:41.648234] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fd00 is same with the state(5) to be set 00:15:00.274 11:18:41 -- host/failover.sh@45 -- # sleep 3 00:15:03.560 11:18:44 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:03.560 00:15:03.560 11:18:45 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:03.819 [2024-10-13 11:18:45.225226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 [2024-10-13 11:18:45.225464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13803c0 is same with the state(5) to be set 00:15:03.819 11:18:45 -- host/failover.sh@50 -- # sleep 3 00:15:07.105 11:18:48 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.105 [2024-10-13 11:18:48.493040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.105 11:18:48 -- host/failover.sh@55 -- # sleep 1 00:15:08.039 11:18:49 -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:08.297 [2024-10-13 11:18:49.778327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the state(5) to be set 00:15:08.298 [2024-10-13 11:18:49.778813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e9f0 is same with the 
state(5) to be set 00:15:08.298 11:18:49 -- host/failover.sh@59 -- # wait 69777 00:15:14.890 0 00:15:14.890 11:18:55 -- host/failover.sh@61 -- # killprocess 69755 00:15:14.890 11:18:55 -- common/autotest_common.sh@926 -- # '[' -z 69755 ']' 00:15:14.890 11:18:55 -- common/autotest_common.sh@930 -- # kill -0 69755 00:15:14.890 11:18:55 -- common/autotest_common.sh@931 -- # uname 00:15:14.890 11:18:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.890 11:18:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69755 00:15:14.890 killing process with pid 69755 00:15:14.890 11:18:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.890 11:18:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.890 11:18:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69755' 00:15:14.890 11:18:55 -- common/autotest_common.sh@945 -- # kill 69755 00:15:14.890 11:18:55 -- common/autotest_common.sh@950 -- # wait 69755 00:15:14.890 11:18:55 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:14.890 [2024-10-13 11:18:38.857579] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:14.890 [2024-10-13 11:18:38.857682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69755 ] 00:15:14.890 [2024-10-13 11:18:38.994457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.890 [2024-10-13 11:18:39.062904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.890 Running I/O for 15 seconds... 00:15:14.890 [2024-10-13 11:18:41.648293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.890 [2024-10-13 11:18:41.648388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.890 [2024-10-13 11:18:41.648421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.890 [2024-10-13 11:18:41.648437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.890 [2024-10-13 11:18:41.648453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.890 [2024-10-13 11:18:41.648484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.890 [2024-10-13 11:18:41.648515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.890 [2024-10-13 11:18:41.648528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.890 [2024-10-13 11:18:41.648543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.890 [2024-10-13 11:18:41.648556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.890 
[2024-10-13 11:18:41.648570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:14.890 [2024-10-13 11:18:41.648583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same pair of notices (a READ or WRITE nvme_io_qpair_print_command line followed by an ABORTED - SQ DELETION (00/08) completion) repeats for every other command still queued on qid:1, lbas 120448 through 121736, len:8 ...]
00:15:14.893 
[2024-10-13 11:18:41.652580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.893 [2024-10-13 11:18:41.652593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821970 is same with the state(5) to be set 00:15:14.893 [2024-10-13 11:18:41.652625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.893 [2024-10-13 11:18:41.652635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.893 [2024-10-13 11:18:41.652646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121096 len:8 PRP1 0x0 PRP2 0x0 00:15:14.893 [2024-10-13 11:18:41.652659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652706] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x821970 was disconnected and freed. reset controller. 00:15:14.893 [2024-10-13 11:18:41.652725] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:14.893 [2024-10-13 11:18:41.652795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.893 [2024-10-13 11:18:41.652833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.893 [2024-10-13 11:18:41.652861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.893 [2024-10-13 11:18:41.652888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.893 [2024-10-13 11:18:41.652915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.893 [2024-10-13 11:18:41.652928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:14.893 [2024-10-13 11:18:41.655383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:14.893 [2024-10-13 11:18:41.655432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7be690 (9): Bad file descriptor 00:15:14.893 [2024-10-13 11:18:41.692178] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
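The burst of ABORTED - SQ DELETION notices above, followed by "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" and "Resetting controller successful", is the behaviour this failover test exercises: when the path to the first listener is removed its submission queue is deleted, every command still queued on that qpair is completed with SQ DELETION, and bdev_nvme fails over to the surviving listener and resets the controller so fio traffic can resume. As a rough sketch only (the Malloc0 bdev, its size, and the exact option spellings are assumptions from memory, not taken from this job), a target that exposes the two TCP paths involved here is set up along these lines with SPDK's rpc.py:

  # create the TCP transport and a malloc-backed namespace (bdev name and size are hypothetical)
  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same subsystem give the host an alternate path to fail over to
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Removing one listener while I/O is in flight is what typically produces an abort storm like the one above and the failover to the other port recorded in the log.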
00:15:14.893 [2024-10-13 11:18:45.225526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:14.893 [2024-10-13 11:18:45.225582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same pair of notices (a READ or WRITE nvme_io_qpair_print_command line followed by an ABORTED - SQ DELETION (00/08) completion) repeats for the remaining commands queued on qid:1, lbas 113336 through 114416, len:8, as the next qpair is torn down ...]
00:15:14.896 [2024-10-13 11:18:45.228156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:14.896 [2024-10-13 11:18:45.228169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.228929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.228976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.228989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.229018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.229046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:14.896 [2024-10-13 11:18:45.229060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.229131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.229166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.896 [2024-10-13 11:18:45.229222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 11:18:45.229350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.896 [2024-10-13 11:18:45.229366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.896 [2024-10-13 
11:18:45.229382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:45.229545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x806450 is same with the state(5) to be set 00:15:14.897 [2024-10-13 11:18:45.229585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.897 [2024-10-13 11:18:45.229596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.897 [2024-10-13 11:18:45.229614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114088 len:8 PRP1 0x0 PRP2 0x0 00:15:14.897 [2024-10-13 11:18:45.229628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229675] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x806450 was disconnected and freed. reset controller. 
00:15:14.897 [2024-10-13 11:18:45.229695] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:14.897 [2024-10-13 11:18:45.229750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.897 [2024-10-13 11:18:45.229773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.897 [2024-10-13 11:18:45.229802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.897 [2024-10-13 11:18:45.229830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.897 [2024-10-13 11:18:45.229857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:45.229871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:14.897 [2024-10-13 11:18:45.229903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7be690 (9): Bad file descriptor 00:15:14.897 [2024-10-13 11:18:45.232541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:14.897 [2024-10-13 11:18:45.264752] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:14.897 [2024-10-13 11:18:49.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.778933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.778962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.778979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.897 [2024-10-13 11:18:49.779663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.897 [2024-10-13 11:18:49.779730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.897 [2024-10-13 11:18:49.779848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.897 [2024-10-13 11:18:49.779864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.779879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.779895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.779909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.779925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.779939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.779966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.779982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.779999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82112 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.780613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:14.898 [2024-10-13 11:18:49.780688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.780985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.780999] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.898 [2024-10-13 11:18:49.781029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.898 [2024-10-13 11:18:49.781246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.898 [2024-10-13 11:18:49.781259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.781970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.781985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.781998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.782038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.782067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.782141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.899 [2024-10-13 11:18:49.782171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 
[2024-10-13 11:18:49.782303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.899 [2024-10-13 11:18:49.782373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.899 [2024-10-13 11:18:49.782388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.900 [2024-10-13 11:18:49.782756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:52 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.782978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.782995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.783009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.783065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.783095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.783124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.900 [2024-10-13 11:18:49.783154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8338e0 is same with the state(5) to be set 00:15:14.900 [2024-10-13 11:18:49.783185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.900 [2024-10-13 11:18:49.783196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.900 [2024-10-13 11:18:49.783207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82000 len:8 PRP1 0x0 PRP2 0x0 00:15:14.900 [2024-10-13 11:18:49.783220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783267] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8338e0 was disconnected and freed. reset controller. 
00:15:14.900 [2024-10-13 11:18:49.783287] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:14.900 [2024-10-13 11:18:49.783364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.900 [2024-10-13 11:18:49.783389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.900 [2024-10-13 11:18:49.783420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.900 [2024-10-13 11:18:49.783448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.900 [2024-10-13 11:18:49.783476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.900 [2024-10-13 11:18:49.783489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:14.900 [2024-10-13 11:18:49.783522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7be690 (9): Bad file descriptor 00:15:14.900 [2024-10-13 11:18:49.786104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:14.900 [2024-10-13 11:18:49.819967] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:14.900 00:15:14.900 Latency(us) 00:15:14.900 [2024-10-13T11:18:56.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.900 [2024-10-13T11:18:56.502Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:14.900 Verification LBA range: start 0x0 length 0x4000 00:15:14.900 NVMe0n1 : 15.01 13175.78 51.47 331.93 0.00 9457.78 603.23 15192.44 00:15:14.900 [2024-10-13T11:18:56.502Z] =================================================================================================================== 00:15:14.900 [2024-10-13T11:18:56.502Z] Total : 13175.78 51.47 331.93 0.00 9457.78 603.23 15192.44 00:15:14.900 Received shutdown signal, test time was about 15.000000 seconds 00:15:14.900 00:15:14.900 Latency(us) 00:15:14.900 [2024-10-13T11:18:56.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.900 [2024-10-13T11:18:56.502Z] =================================================================================================================== 00:15:14.900 [2024-10-13T11:18:56.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.901 11:18:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:14.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:14.901 11:18:55 -- host/failover.sh@65 -- # count=3 00:15:14.901 11:18:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:14.901 11:18:55 -- host/failover.sh@73 -- # bdevperf_pid=69951 00:15:14.901 11:18:55 -- host/failover.sh@75 -- # waitforlisten 69951 /var/tmp/bdevperf.sock 00:15:14.901 11:18:55 -- common/autotest_common.sh@819 -- # '[' -z 69951 ']' 00:15:14.901 11:18:55 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:14.901 11:18:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.901 11:18:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:14.901 11:18:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.901 11:18:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:14.901 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:15.468 11:18:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:15.468 11:18:56 -- common/autotest_common.sh@852 -- # return 0 00:15:15.468 11:18:56 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:15.726 [2024-10-13 11:18:57.070928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:15.727 11:18:57 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:15.727 [2024-10-13 11:18:57.303166] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:15.727 11:18:57 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.294 NVMe0n1 00:15:16.294 11:18:57 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.551 00:15:16.551 11:18:57 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.809 00:15:16.809 11:18:58 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:16.809 11:18:58 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:17.068 11:18:58 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.326 11:18:58 -- host/failover.sh@87 -- # sleep 3 00:15:20.647 11:19:01 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.647 11:19:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:20.647 11:19:02 -- host/failover.sh@90 -- # run_test_pid=70032 00:15:20.647 11:19:02 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.647 11:19:02 -- host/failover.sh@92 -- # wait 70032 00:15:21.582 0 00:15:21.582 11:19:03 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:21.582 [2024-10-13 11:18:55.838434] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:21.582 [2024-10-13 11:18:55.838548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69951 ] 00:15:21.582 [2024-10-13 11:18:55.977015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.582 [2024-10-13 11:18:56.034537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.582 [2024-10-13 11:18:58.733921] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:21.582 [2024-10-13 11:18:58.734038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.582 [2024-10-13 11:18:58.734063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.582 [2024-10-13 11:18:58.734081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.582 [2024-10-13 11:18:58.734093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.583 [2024-10-13 11:18:58.734107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.583 [2024-10-13 11:18:58.734119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.583 [2024-10-13 11:18:58.734132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.583 [2024-10-13 11:18:58.734144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.583 [2024-10-13 11:18:58.734157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:21.583 [2024-10-13 11:18:58.734205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:21.583 [2024-10-13 11:18:58.734236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171d690 (9): Bad file descriptor 00:15:21.583 [2024-10-13 11:18:58.739094] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:21.583 Running I/O for 1 seconds... 
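For reference, the failover sequence that produced the trace above condenses to roughly the following sketch (same NQN, addresses and file paths as this run; the authoritative logic lives in test/nvmf/host/failover.sh, and bdevperf is assumed to already be serving RPCs on /var/tmp/bdevperf.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Expose two extra portals on the target subsystem so the initiator has paths to fail over to.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Attach the bdevperf-side controller on every portal, then drop the active path;
  # bdev_nvme fails over and logs "Resetting controller successful" for each reset.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # The pass/fail check simply counts the resets recorded in the bdevperf output.
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt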
00:15:21.583 00:15:21.583 Latency(us) 00:15:21.583 [2024-10-13T11:19:03.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.583 [2024-10-13T11:19:03.185Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.583 Verification LBA range: start 0x0 length 0x4000 00:15:21.583 NVMe0n1 : 1.01 13371.00 52.23 0.00 0.00 9522.54 942.08 10664.49 00:15:21.583 [2024-10-13T11:19:03.185Z] =================================================================================================================== 00:15:21.583 [2024-10-13T11:19:03.185Z] Total : 13371.00 52.23 0.00 0.00 9522.54 942.08 10664.49 00:15:21.583 11:19:03 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:21.583 11:19:03 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:21.841 11:19:03 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.100 11:19:03 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:22.100 11:19:03 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:22.359 11:19:03 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.618 11:19:04 -- host/failover.sh@101 -- # sleep 3 00:15:25.904 11:19:07 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:25.904 11:19:07 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:25.904 11:19:07 -- host/failover.sh@108 -- # killprocess 69951 00:15:25.904 11:19:07 -- common/autotest_common.sh@926 -- # '[' -z 69951 ']' 00:15:25.904 11:19:07 -- common/autotest_common.sh@930 -- # kill -0 69951 00:15:25.904 11:19:07 -- common/autotest_common.sh@931 -- # uname 00:15:25.904 11:19:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.904 11:19:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69951 00:15:25.904 killing process with pid 69951 00:15:25.904 11:19:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.904 11:19:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.904 11:19:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69951' 00:15:25.904 11:19:07 -- common/autotest_common.sh@945 -- # kill 69951 00:15:25.904 11:19:07 -- common/autotest_common.sh@950 -- # wait 69951 00:15:26.167 11:19:07 -- host/failover.sh@110 -- # sync 00:15:26.167 11:19:07 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.733 11:19:08 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:26.733 11:19:08 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:26.733 11:19:08 -- host/failover.sh@116 -- # nvmftestfini 00:15:26.733 11:19:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:26.733 11:19:08 -- nvmf/common.sh@116 -- # sync 00:15:26.733 11:19:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:26.733 11:19:08 -- nvmf/common.sh@119 -- # set +e 00:15:26.733 11:19:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:26.733 11:19:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:26.733 rmmod nvme_tcp 
00:15:26.733 rmmod nvme_fabrics 00:15:26.733 rmmod nvme_keyring 00:15:26.733 11:19:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:26.733 11:19:08 -- nvmf/common.sh@123 -- # set -e 00:15:26.733 11:19:08 -- nvmf/common.sh@124 -- # return 0 00:15:26.733 11:19:08 -- nvmf/common.sh@477 -- # '[' -n 69697 ']' 00:15:26.733 11:19:08 -- nvmf/common.sh@478 -- # killprocess 69697 00:15:26.733 11:19:08 -- common/autotest_common.sh@926 -- # '[' -z 69697 ']' 00:15:26.733 11:19:08 -- common/autotest_common.sh@930 -- # kill -0 69697 00:15:26.733 11:19:08 -- common/autotest_common.sh@931 -- # uname 00:15:26.733 11:19:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:26.733 11:19:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69697 00:15:26.733 killing process with pid 69697 00:15:26.733 11:19:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:26.733 11:19:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:26.733 11:19:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69697' 00:15:26.733 11:19:08 -- common/autotest_common.sh@945 -- # kill 69697 00:15:26.733 11:19:08 -- common/autotest_common.sh@950 -- # wait 69697 00:15:26.991 11:19:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:26.991 11:19:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:26.991 11:19:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:26.991 11:19:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.991 11:19:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:26.992 11:19:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.992 11:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.992 11:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.992 11:19:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:26.992 ************************************ 00:15:26.992 END TEST nvmf_failover 00:15:26.992 ************************************ 00:15:26.992 00:15:26.992 real 0m32.827s 00:15:26.992 user 2m7.290s 00:15:26.992 sys 0m5.691s 00:15:26.992 11:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.992 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:26.992 11:19:08 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:26.992 11:19:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:26.992 11:19:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:26.992 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:26.992 ************************************ 00:15:26.992 START TEST nvmf_discovery 00:15:26.992 ************************************ 00:15:26.992 11:19:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:26.992 * Looking for test storage... 
00:15:26.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:26.992 11:19:08 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.992 11:19:08 -- nvmf/common.sh@7 -- # uname -s 00:15:26.992 11:19:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.992 11:19:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.992 11:19:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.992 11:19:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.992 11:19:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.992 11:19:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.992 11:19:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.992 11:19:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.992 11:19:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.992 11:19:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:26.992 11:19:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:26.992 11:19:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.992 11:19:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.992 11:19:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.992 11:19:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.992 11:19:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.992 11:19:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.992 11:19:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.992 11:19:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.992 11:19:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.992 11:19:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.992 11:19:08 -- paths/export.sh@5 
-- # export PATH 00:15:26.992 11:19:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.992 11:19:08 -- nvmf/common.sh@46 -- # : 0 00:15:26.992 11:19:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.992 11:19:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.992 11:19:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.992 11:19:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.992 11:19:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.992 11:19:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:26.992 11:19:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.992 11:19:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.992 11:19:08 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:26.992 11:19:08 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:26.992 11:19:08 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:26.992 11:19:08 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:26.992 11:19:08 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:26.992 11:19:08 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:26.992 11:19:08 -- host/discovery.sh@25 -- # nvmftestinit 00:15:26.992 11:19:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.992 11:19:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.992 11:19:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.992 11:19:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.992 11:19:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.992 11:19:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.992 11:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.992 11:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.992 11:19:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:26.992 11:19:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:26.992 11:19:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.992 11:19:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.992 11:19:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.992 11:19:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:26.992 11:19:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.992 11:19:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.992 11:19:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.992 11:19:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.992 11:19:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.992 
11:19:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.992 11:19:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.992 11:19:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.992 11:19:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:26.992 11:19:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:26.992 Cannot find device "nvmf_tgt_br" 00:15:26.992 11:19:08 -- nvmf/common.sh@154 -- # true 00:15:26.992 11:19:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.992 Cannot find device "nvmf_tgt_br2" 00:15:26.992 11:19:08 -- nvmf/common.sh@155 -- # true 00:15:26.992 11:19:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:26.992 11:19:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:27.252 Cannot find device "nvmf_tgt_br" 00:15:27.252 11:19:08 -- nvmf/common.sh@157 -- # true 00:15:27.252 11:19:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:27.252 Cannot find device "nvmf_tgt_br2" 00:15:27.252 11:19:08 -- nvmf/common.sh@158 -- # true 00:15:27.252 11:19:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:27.252 11:19:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:27.252 11:19:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.252 11:19:08 -- nvmf/common.sh@161 -- # true 00:15:27.252 11:19:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.252 11:19:08 -- nvmf/common.sh@162 -- # true 00:15:27.252 11:19:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.252 11:19:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.252 11:19:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.252 11:19:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.252 11:19:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.252 11:19:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.252 11:19:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.252 11:19:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:27.252 11:19:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:27.252 11:19:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:27.252 11:19:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:27.252 11:19:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:27.252 11:19:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:27.252 11:19:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.252 11:19:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.252 11:19:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.252 11:19:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:27.252 11:19:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:27.252 11:19:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:15:27.252 11:19:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.252 11:19:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.252 11:19:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.252 11:19:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.252 11:19:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:27.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:27.252 00:15:27.252 --- 10.0.0.2 ping statistics --- 00:15:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.252 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:27.252 11:19:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:27.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:27.252 00:15:27.252 --- 10.0.0.3 ping statistics --- 00:15:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.252 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:27.252 11:19:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:15:27.252 00:15:27.252 --- 10.0.0.1 ping statistics --- 00:15:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.252 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:27.252 11:19:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.252 11:19:08 -- nvmf/common.sh@421 -- # return 0 00:15:27.252 11:19:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:27.252 11:19:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.252 11:19:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:27.252 11:19:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:27.252 11:19:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.252 11:19:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:27.252 11:19:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:27.511 11:19:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:27.511 11:19:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.511 11:19:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:27.511 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 11:19:08 -- nvmf/common.sh@469 -- # nvmfpid=70296 00:15:27.511 11:19:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.511 11:19:08 -- nvmf/common.sh@470 -- # waitforlisten 70296 00:15:27.511 11:19:08 -- common/autotest_common.sh@819 -- # '[' -z 70296 ']' 00:15:27.511 11:19:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.511 11:19:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:27.511 11:19:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
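The veth/namespace topology that nvmf_veth_init builds above can be reproduced by hand with a few iproute2 commands; a trimmed sketch using the same interface names and addresses as this run (only one target interface shown, root privileges assumed):

  # The target runs inside its own network namespace.
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair for the initiator side, one for the target side; the target end moves into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # 10.0.0.1 is the initiator, 10.0.0.2 the target inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the *_br ends so the initiator (root namespace) and the target namespace can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Let NVMe/TCP traffic in, allow hairpin forwarding on the bridge, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2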
00:15:27.511 11:19:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:27.511 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:27.511 [2024-10-13 11:19:08.921532] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:27.511 [2024-10-13 11:19:08.921625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.511 [2024-10-13 11:19:09.055979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.770 [2024-10-13 11:19:09.110250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.770 [2024-10-13 11:19:09.110630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.770 [2024-10-13 11:19:09.110753] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.770 [2024-10-13 11:19:09.110832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.770 [2024-10-13 11:19:09.110931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.338 11:19:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:28.338 11:19:09 -- common/autotest_common.sh@852 -- # return 0 00:15:28.338 11:19:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:28.338 11:19:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:28.338 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 11:19:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.597 11:19:09 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.597 11:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.597 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [2024-10-13 11:19:09.947463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.597 11:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.597 11:19:09 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:28.597 11:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.597 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 [2024-10-13 11:19:09.955567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:28.597 11:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.597 11:19:09 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:28.597 11:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.597 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 null0 00:15:28.597 11:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.597 11:19:09 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:28.597 11:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.597 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 null1 00:15:28.597 11:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.597 11:19:09 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:28.597 11:19:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.597 11:19:09 -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.597 11:19:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.597 11:19:09 -- host/discovery.sh@45 -- # hostpid=70330 00:15:28.597 11:19:09 -- host/discovery.sh@46 -- # waitforlisten 70330 /tmp/host.sock 00:15:28.597 11:19:09 -- common/autotest_common.sh@819 -- # '[' -z 70330 ']' 00:15:28.597 11:19:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:15:28.597 11:19:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.597 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:28.597 11:19:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:28.597 11:19:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.597 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 11:19:09 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:28.597 [2024-10-13 11:19:10.042566] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:28.597 [2024-10-13 11:19:10.042651] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70330 ] 00:15:28.597 [2024-10-13 11:19:10.181735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.855 [2024-10-13 11:19:10.251032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.855 [2024-10-13 11:19:10.251232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.421 11:19:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.421 11:19:10 -- common/autotest_common.sh@852 -- # return 0 00:15:29.421 11:19:10 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.421 11:19:10 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:29.421 11:19:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.421 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:29.421 11:19:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.421 11:19:10 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:29.421 11:19:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.421 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:29.421 11:19:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.421 11:19:10 -- host/discovery.sh@72 -- # notify_id=0 00:15:29.421 11:19:10 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:29.421 11:19:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.421 11:19:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.421 11:19:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.421 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:29.421 11:19:10 -- host/discovery.sh@59 -- # sort 00:15:29.421 11:19:10 -- host/discovery.sh@59 -- # xargs 00:15:29.421 11:19:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:29.679 11:19:11 -- host/discovery.sh@79 -- # get_bdev_list 00:15:29.679 
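The discovery flow being exercised here reduces to a short RPC sequence; a minimal sketch with the same ports and NQNs as this run (target RPCs assumed to go to the default /var/tmp/spdk.sock, host-side RPCs to /tmp/host.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target: TCP transport plus a listener on the well-known discovery NQN, port 8009.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  # Host (the second nvmf_tgt on /tmp/host.sock): follow the discovery service; every
  # subsystem it reports gets attached automatically as an "nvme*" controller.
  $rpc -s /tmp/host.sock log_set_flag bdev_nvme
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Publish a subsystem with a namespace, a data listener and the host NQN, then give
  # the discovery poller a moment and confirm the host saw it.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  sleep 1
  $rpc -s /tmp/host.sock bdev_nvme_get_controllers
  $rpc -s /tmp/host.sock bdev_get_bdevs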
11:19:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # sort 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # xargs 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:29.679 11:19:11 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # xargs 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # sort 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:29.679 11:19:11 -- host/discovery.sh@83 -- # get_bdev_list 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # sort 00:15:29.679 11:19:11 -- host/discovery.sh@55 -- # xargs 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:29.679 11:19:11 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.679 11:19:11 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # sort 00:15:29.679 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.679 11:19:11 -- host/discovery.sh@59 -- # xargs 00:15:29.679 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.679 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:29.938 11:19:11 -- host/discovery.sh@87 -- # get_bdev_list 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # sort 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # xargs 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set 
+x 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:29.938 11:19:11 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.938 [2024-10-13 11:19:11.331980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:29.938 11:19:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.938 11:19:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.938 11:19:11 -- host/discovery.sh@59 -- # sort 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.938 11:19:11 -- host/discovery.sh@59 -- # xargs 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:29.938 11:19:11 -- host/discovery.sh@93 -- # get_bdev_list 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # sort 00:15:29.938 11:19:11 -- host/discovery.sh@55 -- # xargs 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:29.938 11:19:11 -- host/discovery.sh@94 -- # get_notification_count 00:15:29.938 11:19:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.938 11:19:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@74 -- # notification_count=0 00:15:29.938 11:19:11 -- host/discovery.sh@75 -- # notify_id=0 00:15:29.938 11:19:11 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:29.938 11:19:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.938 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:15:29.938 11:19:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.938 11:19:11 -- host/discovery.sh@100 -- # sleep 1 00:15:30.504 [2024-10-13 11:19:11.980918] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:30.504 [2024-10-13 11:19:11.980974] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:30.504 [2024-10-13 11:19:11.980993] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:30.504 [2024-10-13 11:19:11.987011] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:30.504 [2024-10-13 11:19:12.042814] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:30.504 [2024-10-13 11:19:12.042845] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:31.071 11:19:12 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:31.071 11:19:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.071 11:19:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.071 11:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.071 11:19:12 -- host/discovery.sh@59 -- # sort 00:15:31.071 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 11:19:12 -- host/discovery.sh@59 -- # xargs 00:15:31.071 11:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@102 -- # get_bdev_list 00:15:31.071 11:19:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.071 11:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.071 11:19:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.071 11:19:12 -- host/discovery.sh@55 -- # sort 00:15:31.071 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 11:19:12 -- host/discovery.sh@55 -- # xargs 00:15:31.071 11:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:31.071 11:19:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:31.071 11:19:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:31.071 11:19:12 -- host/discovery.sh@63 -- # sort -n 00:15:31.071 11:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.071 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.071 11:19:12 -- host/discovery.sh@63 -- # xargs 00:15:31.071 11:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:15:31.071 11:19:12 -- host/discovery.sh@104 -- # get_notification_count 00:15:31.330 11:19:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:31.330 11:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.330 11:19:12 -- host/discovery.sh@74 -- # jq '. | length' 00:15:31.330 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.330 11:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.330 11:19:12 -- host/discovery.sh@74 -- # notification_count=1 00:15:31.330 11:19:12 -- host/discovery.sh@75 -- # notify_id=1 00:15:31.330 11:19:12 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:31.330 11:19:12 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:31.330 11:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.330 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.330 11:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.330 11:19:12 -- host/discovery.sh@109 -- # sleep 1 00:15:32.268 11:19:13 -- host/discovery.sh@110 -- # get_bdev_list 00:15:32.268 11:19:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.268 11:19:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.268 11:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.268 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:15:32.268 11:19:13 -- host/discovery.sh@55 -- # sort 00:15:32.268 11:19:13 -- host/discovery.sh@55 -- # xargs 00:15:32.268 11:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.268 11:19:13 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.268 11:19:13 -- host/discovery.sh@111 -- # get_notification_count 00:15:32.268 11:19:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:32.268 11:19:13 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:32.268 11:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.268 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:15:32.268 11:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.268 11:19:13 -- host/discovery.sh@74 -- # notification_count=1 00:15:32.268 11:19:13 -- host/discovery.sh@75 -- # notify_id=2 00:15:32.268 11:19:13 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:32.268 11:19:13 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:32.268 11:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.268 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:15:32.268 [2024-10-13 11:19:13.843400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:32.268 [2024-10-13 11:19:13.844178] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:32.268 [2024-10-13 11:19:13.844207] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:32.268 11:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.268 11:19:13 -- host/discovery.sh@117 -- # sleep 1 00:15:32.268 [2024-10-13 11:19:13.850176] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:32.527 [2024-10-13 11:19:13.914529] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:32.527 [2024-10-13 11:19:13.914572] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:32.527 [2024-10-13 11:19:13.914587] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:33.463 11:19:14 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:33.463 11:19:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.463 11:19:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.463 11:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.463 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.463 11:19:14 -- host/discovery.sh@59 -- # xargs 00:15:33.463 11:19:14 -- host/discovery.sh@59 -- # sort 00:15:33.463 11:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.463 11:19:14 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.463 11:19:14 -- host/discovery.sh@119 -- # get_bdev_list 00:15:33.463 11:19:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.463 11:19:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.463 11:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.463 11:19:14 -- host/discovery.sh@55 -- # sort 00:15:33.463 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.463 11:19:14 -- host/discovery.sh@55 -- # xargs 00:15:33.463 11:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.463 11:19:14 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.463 11:19:14 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:33.463 11:19:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:33.463 11:19:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:33.463 11:19:14 -- host/discovery.sh@63 
-- # sort -n 00:15:33.463 11:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.463 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.463 11:19:14 -- host/discovery.sh@63 -- # xargs 00:15:33.463 11:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.463 11:19:15 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:33.463 11:19:15 -- host/discovery.sh@121 -- # get_notification_count 00:15:33.463 11:19:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.463 11:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.463 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:33.463 11:19:15 -- host/discovery.sh@74 -- # jq '. | length' 00:15:33.464 11:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.722 11:19:15 -- host/discovery.sh@74 -- # notification_count=0 00:15:33.722 11:19:15 -- host/discovery.sh@75 -- # notify_id=2 00:15:33.722 11:19:15 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:33.722 11:19:15 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.722 11:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.722 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:33.722 [2024-10-13 11:19:15.073933] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:33.722 [2024-10-13 11:19:15.073988] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.722 11:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.722 11:19:15 -- host/discovery.sh@127 -- # sleep 1 00:15:33.722 [2024-10-13 11:19:15.079926] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:33.722 [2024-10-13 11:19:15.079976] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:33.722 [2024-10-13 11:19:15.080076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.722 [2024-10-13 11:19:15.080103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.722 [2024-10-13 11:19:15.080115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.722 [2024-10-13 11:19:15.080124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.722 [2024-10-13 11:19:15.080132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.722 [2024-10-13 11:19:15.080141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.722 [2024-10-13 11:19:15.080149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.722 [2024-10-13 11:19:15.080157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.722 [2024-10-13 11:19:15.080180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1682c10 is same with the state(5) to be set 00:15:34.658 11:19:16 -- host/discovery.sh@128 -- # get_subsystem_names 00:15:34.658 11:19:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.658 11:19:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.658 11:19:16 -- host/discovery.sh@59 -- # sort 00:15:34.658 11:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.658 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:34.658 11:19:16 -- host/discovery.sh@59 -- # xargs 00:15:34.658 11:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@129 -- # get_bdev_list 00:15:34.658 11:19:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.658 11:19:16 -- host/discovery.sh@55 -- # sort 00:15:34.658 11:19:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.658 11:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.658 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:34.658 11:19:16 -- host/discovery.sh@55 -- # xargs 00:15:34.658 11:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:34.658 11:19:16 -- host/discovery.sh@63 -- # sort -n 00:15:34.658 11:19:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:34.658 11:19:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:34.658 11:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.658 11:19:16 -- host/discovery.sh@63 -- # xargs 00:15:34.658 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:34.658 11:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:34.658 11:19:16 -- host/discovery.sh@131 -- # get_notification_count 00:15:34.659 11:19:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:34.659 11:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.659 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 11:19:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:34.918 11:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.918 11:19:16 -- host/discovery.sh@74 -- # notification_count=0 00:15:34.918 11:19:16 -- host/discovery.sh@75 -- # notify_id=2 00:15:34.918 11:19:16 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:34.918 11:19:16 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:34.918 11:19:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.918 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:34.918 11:19:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.918 11:19:16 -- host/discovery.sh@135 -- # sleep 1 00:15:35.922 11:19:17 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:35.922 11:19:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.922 11:19:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.922 11:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.922 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.922 11:19:17 -- host/discovery.sh@59 -- # sort 00:15:35.922 11:19:17 -- host/discovery.sh@59 -- # xargs 00:15:35.922 11:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.922 11:19:17 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:35.922 11:19:17 -- host/discovery.sh@137 -- # get_bdev_list 00:15:35.922 11:19:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.922 11:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.922 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.922 11:19:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.922 11:19:17 -- host/discovery.sh@55 -- # sort 00:15:35.922 11:19:17 -- host/discovery.sh@55 -- # xargs 00:15:35.922 11:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.922 11:19:17 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:35.922 11:19:17 -- host/discovery.sh@138 -- # get_notification_count 00:15:35.922 11:19:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:35.922 11:19:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:35.922 11:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.922 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.922 11:19:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.922 11:19:17 -- host/discovery.sh@74 -- # notification_count=2 00:15:35.922 11:19:17 -- host/discovery.sh@75 -- # notify_id=4 00:15:35.922 11:19:17 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:35.922 11:19:17 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.922 11:19:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.922 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 [2024-10-13 11:19:18.493317] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:37.303 [2024-10-13 11:19:18.493394] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:37.303 [2024-10-13 11:19:18.493414] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:37.303 [2024-10-13 11:19:18.499366] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:37.303 [2024-10-13 11:19:18.558378] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:37.303 [2024-10-13 11:19:18.558438] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@640 -- # local es=0 00:15:37.303 11:19:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 request: 00:15:37.303 { 00:15:37.303 "name": "nvme", 00:15:37.303 "trtype": "tcp", 00:15:37.303 "traddr": "10.0.0.2", 00:15:37.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.303 "adrfam": "ipv4", 00:15:37.303 "trsvcid": "8009", 00:15:37.303 "wait_for_attach": true, 00:15:37.303 "method": "bdev_nvme_start_discovery", 00:15:37.303 "req_id": 1 00:15:37.303 } 00:15:37.303 Got JSON-RPC error response 00:15:37.303 response: 00:15:37.303 { 00:15:37.303 "code": -17, 00:15:37.303 "message": "File exists" 00:15:37.303 } 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:15:37.303 11:19:18 -- common/autotest_common.sh@643 -- # es=1 00:15:37.303 11:19:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:37.303 11:19:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:37.303 11:19:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:37.303 11:19:18 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # sort 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # xargs 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:37.303 11:19:18 -- host/discovery.sh@147 -- # get_bdev_list 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # sort 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # xargs 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@640 -- # local es=0 00:15:37.303 11:19:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 request: 00:15:37.303 { 00:15:37.303 "name": "nvme_second", 00:15:37.303 "trtype": "tcp", 00:15:37.303 "traddr": "10.0.0.2", 00:15:37.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.303 "adrfam": "ipv4", 00:15:37.303 "trsvcid": "8009", 00:15:37.303 "wait_for_attach": true, 00:15:37.303 "method": "bdev_nvme_start_discovery", 00:15:37.303 "req_id": 1 00:15:37.303 } 00:15:37.303 Got JSON-RPC error response 00:15:37.303 response: 00:15:37.303 { 00:15:37.303 "code": -17, 00:15:37.303 "message": "File exists" 00:15:37.303 } 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:37.303 11:19:18 -- common/autotest_common.sh@643 -- # es=1 00:15:37.303 11:19:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:37.303 11:19:18 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:15:37.303 11:19:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:37.303 11:19:18 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # sort 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.303 11:19:18 -- host/discovery.sh@67 -- # xargs 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:37.303 11:19:18 -- host/discovery.sh@153 -- # get_bdev_list 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # sort 00:15:37.303 11:19:18 -- host/discovery.sh@55 -- # xargs 00:15:37.303 11:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.303 11:19:18 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.303 11:19:18 -- common/autotest_common.sh@640 -- # local es=0 00:15:37.303 11:19:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.303 11:19:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:37.303 11:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.303 11:19:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.303 11:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.303 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 [2024-10-13 11:19:19.800698] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.239 [2024-10-13 11:19:19.800832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.239 [2024-10-13 11:19:19.800874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.239 [2024-10-13 11:19:19.800892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d4270 with addr=10.0.0.2, port=8010 00:15:38.239 [2024-10-13 11:19:19.800908] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:38.240 [2024-10-13 11:19:19.800917] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:38.240 [2024-10-13 11:19:19.800926] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:39.617 [2024-10-13 11:19:20.800704] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:39.617 
[2024-10-13 11:19:20.800839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:39.617 [2024-10-13 11:19:20.800879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:39.617 [2024-10-13 11:19:20.800895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d4270 with addr=10.0.0.2, port=8010 00:15:39.617 [2024-10-13 11:19:20.800912] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:39.617 [2024-10-13 11:19:20.800921] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:39.617 [2024-10-13 11:19:20.800929] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:40.552 [2024-10-13 11:19:21.800551] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:40.552 request: 00:15:40.552 { 00:15:40.552 "name": "nvme_second", 00:15:40.552 "trtype": "tcp", 00:15:40.552 "traddr": "10.0.0.2", 00:15:40.552 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:40.552 "adrfam": "ipv4", 00:15:40.552 "trsvcid": "8010", 00:15:40.552 "attach_timeout_ms": 3000, 00:15:40.552 "method": "bdev_nvme_start_discovery", 00:15:40.552 "req_id": 1 00:15:40.552 } 00:15:40.552 Got JSON-RPC error response 00:15:40.552 response: 00:15:40.552 { 00:15:40.552 "code": -110, 00:15:40.552 "message": "Connection timed out" 00:15:40.552 } 00:15:40.552 11:19:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:40.552 11:19:21 -- common/autotest_common.sh@643 -- # es=1 00:15:40.552 11:19:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:40.552 11:19:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:40.552 11:19:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:40.552 11:19:21 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:40.552 11:19:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:40.552 11:19:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:40.552 11:19:21 -- host/discovery.sh@67 -- # sort 00:15:40.552 11:19:21 -- host/discovery.sh@67 -- # xargs 00:15:40.552 11:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.552 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.552 11:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.552 11:19:21 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:40.552 11:19:21 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:40.552 11:19:21 -- host/discovery.sh@162 -- # kill 70330 00:15:40.552 11:19:21 -- host/discovery.sh@163 -- # nvmftestfini 00:15:40.552 11:19:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.552 11:19:21 -- nvmf/common.sh@116 -- # sync 00:15:40.552 11:19:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.552 11:19:21 -- nvmf/common.sh@119 -- # set +e 00:15:40.552 11:19:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.552 11:19:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.552 rmmod nvme_tcp 00:15:40.552 rmmod nvme_fabrics 00:15:40.552 rmmod nvme_keyring 00:15:40.552 11:19:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.552 11:19:21 -- nvmf/common.sh@123 -- # set -e 00:15:40.552 11:19:21 -- nvmf/common.sh@124 -- # return 0 00:15:40.552 11:19:21 -- nvmf/common.sh@477 -- # '[' -n 70296 ']' 00:15:40.552 11:19:21 -- nvmf/common.sh@478 -- # killprocess 70296 00:15:40.552 11:19:21 -- common/autotest_common.sh@926 -- # '[' -z 70296 ']' 
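For readers following the xtrace above, the duplicate-start and timeout cases can be condensed into a short sketch. It assumes only what the log already shows: an SPDK host app listening on /tmp/host.sock, a discovery service on 10.0.0.2:8009, and nothing listening on port 8010. The direct rpc.py invocation (instead of the suite's rpc_cmd wrapper) and the echo messages are illustrative, not part of the test.

#!/usr/bin/env bash
# Minimal sketch (assumptions noted above) of the two failure cases exercised here.
set -uo pipefail

rpc="rpc.py -s /tmp/host.sock"

# A discovery controller named "nvme" is already attached, so starting it again
# is rejected with JSON-RPC error -17 "File exists", as in the responses above.
$rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: -17 File exists"

# Port 8010 has no listener, so a bounded attach window (-T, milliseconds) ends
# with -110 "Connection timed out" after the connect() retries logged above.
$rpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: -110 timeout"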
00:15:40.553 11:19:21 -- common/autotest_common.sh@930 -- # kill -0 70296 00:15:40.553 11:19:21 -- common/autotest_common.sh@931 -- # uname 00:15:40.553 11:19:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:40.553 11:19:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70296 00:15:40.553 11:19:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:40.553 killing process with pid 70296 00:15:40.553 11:19:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:40.553 11:19:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70296' 00:15:40.553 11:19:21 -- common/autotest_common.sh@945 -- # kill 70296 00:15:40.553 11:19:21 -- common/autotest_common.sh@950 -- # wait 70296 00:15:40.812 11:19:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.812 11:19:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.812 11:19:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.812 11:19:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.812 11:19:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.812 11:19:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.812 11:19:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.812 11:19:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.812 11:19:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:40.812 00:15:40.812 real 0m13.777s 00:15:40.812 user 0m26.510s 00:15:40.812 sys 0m2.149s 00:15:40.812 11:19:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.812 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 ************************************ 00:15:40.812 END TEST nvmf_discovery 00:15:40.812 ************************************ 00:15:40.812 11:19:22 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:40.812 11:19:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:40.812 11:19:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.812 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 ************************************ 00:15:40.812 START TEST nvmf_discovery_remove_ifc 00:15:40.812 ************************************ 00:15:40.812 11:19:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:40.812 * Looking for test storage... 
00:15:40.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:40.812 11:19:22 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.812 11:19:22 -- nvmf/common.sh@7 -- # uname -s 00:15:40.812 11:19:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.812 11:19:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.812 11:19:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.812 11:19:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.812 11:19:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.812 11:19:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.812 11:19:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.812 11:19:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.812 11:19:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.812 11:19:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.812 11:19:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:40.812 11:19:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:40.812 11:19:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.812 11:19:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.812 11:19:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.812 11:19:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.812 11:19:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.812 11:19:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.812 11:19:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.812 11:19:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.812 11:19:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.812 11:19:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.812 11:19:22 -- 
paths/export.sh@5 -- # export PATH 00:15:40.813 11:19:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.813 11:19:22 -- nvmf/common.sh@46 -- # : 0 00:15:40.813 11:19:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.813 11:19:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.813 11:19:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.813 11:19:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.813 11:19:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.813 11:19:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:40.813 11:19:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.813 11:19:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:40.813 11:19:22 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:40.813 11:19:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:40.813 11:19:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.813 11:19:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.813 11:19:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.813 11:19:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.813 11:19:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.813 11:19:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.813 11:19:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.813 11:19:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:40.813 11:19:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:40.813 11:19:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:40.813 11:19:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:40.813 11:19:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:40.813 11:19:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:40.813 11:19:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.813 11:19:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.813 11:19:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.813 11:19:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:40.813 11:19:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.813 11:19:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.813 11:19:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.813 11:19:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:40.813 11:19:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.813 11:19:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.813 11:19:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.813 11:19:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.813 11:19:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:40.813 11:19:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:40.813 Cannot find device "nvmf_tgt_br" 00:15:40.813 11:19:22 -- nvmf/common.sh@154 -- # true 00:15:40.813 11:19:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.813 Cannot find device "nvmf_tgt_br2" 00:15:40.813 11:19:22 -- nvmf/common.sh@155 -- # true 00:15:40.813 11:19:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:40.813 11:19:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:41.072 Cannot find device "nvmf_tgt_br" 00:15:41.072 11:19:22 -- nvmf/common.sh@157 -- # true 00:15:41.072 11:19:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:41.072 Cannot find device "nvmf_tgt_br2" 00:15:41.072 11:19:22 -- nvmf/common.sh@158 -- # true 00:15:41.072 11:19:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:41.072 11:19:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:41.072 11:19:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.072 11:19:22 -- nvmf/common.sh@161 -- # true 00:15:41.072 11:19:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.072 11:19:22 -- nvmf/common.sh@162 -- # true 00:15:41.072 11:19:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.072 11:19:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.072 11:19:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.072 11:19:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.072 11:19:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.072 11:19:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.072 11:19:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.072 11:19:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.072 11:19:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.072 11:19:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:41.072 11:19:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:41.072 11:19:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:41.072 11:19:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:41.072 11:19:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.072 11:19:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.072 11:19:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.072 11:19:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:41.072 11:19:22 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:15:41.072 11:19:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.072 11:19:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.072 11:19:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.072 11:19:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.072 11:19:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.072 11:19:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:41.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:41.072 00:15:41.072 --- 10.0.0.2 ping statistics --- 00:15:41.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.072 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:41.072 11:19:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:41.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:41.072 00:15:41.072 --- 10.0.0.3 ping statistics --- 00:15:41.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.072 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:41.072 11:19:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:41.072 00:15:41.072 --- 10.0.0.1 ping statistics --- 00:15:41.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.072 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:41.331 11:19:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.331 11:19:22 -- nvmf/common.sh@421 -- # return 0 00:15:41.331 11:19:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:41.331 11:19:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.331 11:19:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:41.331 11:19:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:41.331 11:19:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.331 11:19:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:41.331 11:19:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:41.331 11:19:22 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:41.331 11:19:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:41.331 11:19:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:41.331 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:41.331 11:19:22 -- nvmf/common.sh@469 -- # nvmfpid=70830 00:15:41.331 11:19:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.331 11:19:22 -- nvmf/common.sh@470 -- # waitforlisten 70830 00:15:41.331 11:19:22 -- common/autotest_common.sh@819 -- # '[' -z 70830 ']' 00:15:41.331 11:19:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.331 11:19:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:41.331 11:19:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
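The nvmf_veth_init trace above is easier to follow as a condensed sketch of the topology it builds: one initiator veth pair left on the host, two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with the 10.0.0.0/24 addressing, iptables rules, and ping checks seen in the log. Interface, namespace, and address names are taken from the trace; nothing else is added.

#!/usr/bin/env bash
# Condensed sketch of the test network set up above (names/addresses from the trace).
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator stays on the host, both target interfaces go into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 target-side in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (4420) in on the initiator interface and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1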
00:15:41.331 11:19:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:41.331 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:41.331 [2024-10-13 11:19:22.749195] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:41.331 [2024-10-13 11:19:22.749292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.331 [2024-10-13 11:19:22.890417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.590 [2024-10-13 11:19:22.984143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.590 [2024-10-13 11:19:22.984315] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.590 [2024-10-13 11:19:22.984347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.590 [2024-10-13 11:19:22.984367] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.590 [2024-10-13 11:19:22.984415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.158 11:19:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:42.158 11:19:23 -- common/autotest_common.sh@852 -- # return 0 00:15:42.158 11:19:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:42.158 11:19:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:42.158 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 11:19:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.417 11:19:23 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:42.417 11:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.417 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 [2024-10-13 11:19:23.798141] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.417 [2024-10-13 11:19:23.806242] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:42.417 null0 00:15:42.417 [2024-10-13 11:19:23.838203] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.417 11:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.417 11:19:23 -- host/discovery_remove_ifc.sh@59 -- # hostpid=70862 00:15:42.417 11:19:23 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:42.417 11:19:23 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 70862 /tmp/host.sock 00:15:42.417 11:19:23 -- common/autotest_common.sh@819 -- # '[' -z 70862 ']' 00:15:42.417 11:19:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:15:42.417 11:19:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.417 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:42.417 11:19:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:42.417 11:19:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.417 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 [2024-10-13 11:19:23.908504] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:42.417 [2024-10-13 11:19:23.908613] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:15:42.676 [2024-10-13 11:19:24.045733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.676 [2024-10-13 11:19:24.116065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.676 [2024-10-13 11:19:24.116282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.676 11:19:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:42.676 11:19:24 -- common/autotest_common.sh@852 -- # return 0 00:15:42.676 11:19:24 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.676 11:19:24 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:42.676 11:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.676 11:19:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.676 11:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.676 11:19:24 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:42.676 11:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.676 11:19:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.676 11:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.676 11:19:24 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:42.676 11:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.676 11:19:24 -- common/autotest_common.sh@10 -- # set +x 00:15:43.652 [2024-10-13 11:19:25.251190] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:43.652 [2024-10-13 11:19:25.251243] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:43.652 [2024-10-13 11:19:25.251263] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:43.910 [2024-10-13 11:19:25.257248] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:43.910 [2024-10-13 11:19:25.313308] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:43.910 [2024-10-13 11:19:25.313403] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:43.910 [2024-10-13 11:19:25.313431] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:43.910 [2024-10-13 11:19:25.313448] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:43.910 [2024-10-13 11:19:25.313475] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:43.910 11:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.910 11:19:25 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.910 [2024-10-13 11:19:25.319683] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe73be0 was disconnected and freed. delete nvme_qpair. 00:15:43.910 11:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.910 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.910 11:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.910 11:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.910 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.910 11:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:43.910 11:19:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:45.285 11:19:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:45.285 11:19:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.285 11:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.285 11:19:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:45.285 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:15:45.286 11:19:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:45.286 11:19:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:45.286 11:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.286 11:19:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:45.286 11:19:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:46.222 11:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.222 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:46.222 11:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:46.222 11:19:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:47.163 11:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.163 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:47.163 11:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:47.163 11:19:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.099 11:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.099 11:19:29 -- common/autotest_common.sh@10 -- # set +x 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.099 11:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:48.099 11:19:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:49.474 11:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.474 11:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:49.474 11:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:49.474 [2024-10-13 11:19:30.741158] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:49.474 11:19:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.474 [2024-10-13 11:19:30.741667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.474 [2024-10-13 11:19:30.741787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.474 [2024-10-13 11:19:30.741885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.474 [2024-10-13 11:19:30.741978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.474 [2024-10-13 11:19:30.742071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.474 [2024-10-13 11:19:30.742170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.474 [2024-10-13 11:19:30.742271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.474 [2024-10-13 11:19:30.742341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.474 [2024-10-13 11:19:30.742440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.474 [2024-10-13 11:19:30.742519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.474 [2024-10-13 11:19:30.742606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde8de0 is same with the state(5) to be set 00:15:49.474 [2024-10-13 11:19:30.751155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde8de0 (9): Bad file descriptor 00:15:49.474 [2024-10-13 11:19:30.761193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:50.408 11:19:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.408 11:19:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.408 11:19:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.408 11:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.408 11:19:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.408 11:19:31 -- common/autotest_common.sh@10 -- # set +x 00:15:50.408 11:19:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.408 [2024-10-13 11:19:31.800480] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:51.369 [2024-10-13 11:19:32.824455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:52.305 [2024-10-13 11:19:33.848468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:52.305 [2024-10-13 11:19:33.848592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde8de0 with addr=10.0.0.2, port=4420 00:15:52.305 [2024-10-13 11:19:33.848627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde8de0 is same with the state(5) to be set 00:15:52.305 [2024-10-13 11:19:33.848681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:52.305 [2024-10-13 11:19:33.848705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:52.305 [2024-10-13 11:19:33.848724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:52.305 [2024-10-13 11:19:33.848744] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:52.305 [2024-10-13 11:19:33.849618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde8de0 (9): Bad file descriptor 00:15:52.305 [2024-10-13 11:19:33.849695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
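The repeated blocks above are a single polling primitive at work: once per second the host app is asked for its bdev list over /tmp/host.sock and the result is compared against the name the test expects. Reconstructed from the @29/@33/@34 trace lines, the helpers behave roughly like the sketch below (rpc_cmd stands for the autotest wrapper around scripts/rpc.py; the bodies are an approximation, not the verbatim script):

    # approximate reconstruction of the helpers traced above
    get_bdev_list() {
        # list every bdev known to the host app over its private RPC socket
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # poll once per second until the bdev list matches the expected value
        # (wait_for_bdev '' therefore waits for nvme0n1 to disappear)
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1
        done
    }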
00:15:52.305 [2024-10-13 11:19:33.849758] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:52.305 [2024-10-13 11:19:33.849826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.306 [2024-10-13 11:19:33.849865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.306 [2024-10-13 11:19:33.849894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.306 [2024-10-13 11:19:33.849918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.306 [2024-10-13 11:19:33.849941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.306 [2024-10-13 11:19:33.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.306 [2024-10-13 11:19:33.849984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.306 [2024-10-13 11:19:33.850003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.306 [2024-10-13 11:19:33.850025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.306 [2024-10-13 11:19:33.850045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.306 [2024-10-13 11:19:33.850064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
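Taken together, the errno 110 timeouts, the aborted admin commands and the failed controller reset above are the intended outcome of the fault injected earlier at @75/@76, and the restore half of the test follows just below at @82/@83/@86. Condensed from those trace lines, the remove/restore flow is essentially:

    # fault injection: pull the target interface out from under the live connection
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # nvme0n1 must drop out of the host's bdev list

    # recovery: put the address back, bring the link up, expect rediscovery
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # the re-attached namespace comes back as nvme1n1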
00:15:52.306 [2024-10-13 11:19:33.850095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde91f0 (9): Bad file descriptor 00:15:52.306 [2024-10-13 11:19:33.850750] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:52.306 [2024-10-13 11:19:33.850809] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:52.306 11:19:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:52.306 11:19:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.306 11:19:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.682 11:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.682 11:19:34 -- common/autotest_common.sh@10 -- # set +x 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.682 11:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.682 11:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.682 11:19:34 -- common/autotest_common.sh@10 -- # set +x 00:15:53.682 11:19:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.682 11:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.682 11:19:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:53.682 11:19:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:54.619 [2024-10-13 11:19:35.856812] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:54.619 [2024-10-13 11:19:35.856841] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:54.619 [2024-10-13 11:19:35.856874] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:54.619 [2024-10-13 11:19:35.862867] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:54.619 [2024-10-13 11:19:35.917886] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:54.619 [2024-10-13 11:19:35.917949] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:54.619 [2024-10-13 11:19:35.917970] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:54.619 [2024-10-13 11:19:35.917984] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:54.619 [2024-10-13 11:19:35.917993] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:54.619 [2024-10-13 11:19:35.925099] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe2ace0 was disconnected and freed. delete nvme_qpair. 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.619 11:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.619 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.619 11:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:54.619 11:19:36 -- host/discovery_remove_ifc.sh@90 -- # killprocess 70862 00:15:54.619 11:19:36 -- common/autotest_common.sh@926 -- # '[' -z 70862 ']' 00:15:54.619 11:19:36 -- common/autotest_common.sh@930 -- # kill -0 70862 00:15:54.619 11:19:36 -- common/autotest_common.sh@931 -- # uname 00:15:54.619 11:19:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:54.619 11:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70862 00:15:54.619 11:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:54.619 11:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:54.619 killing process with pid 70862 00:15:54.619 11:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70862' 00:15:54.619 11:19:36 -- common/autotest_common.sh@945 -- # kill 70862 00:15:54.619 11:19:36 -- common/autotest_common.sh@950 -- # wait 70862 00:15:54.879 11:19:36 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:54.879 11:19:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:54.879 11:19:36 -- nvmf/common.sh@116 -- # sync 00:15:54.879 11:19:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:54.879 11:19:36 -- nvmf/common.sh@119 -- # set +e 00:15:54.879 11:19:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:54.879 11:19:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:54.879 rmmod nvme_tcp 00:15:54.879 rmmod nvme_fabrics 00:15:54.879 rmmod nvme_keyring 00:15:54.879 11:19:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:54.879 11:19:36 -- nvmf/common.sh@123 -- # set -e 00:15:54.879 11:19:36 -- nvmf/common.sh@124 -- # return 0 00:15:54.879 11:19:36 -- nvmf/common.sh@477 -- # '[' -n 70830 ']' 00:15:54.879 11:19:36 -- nvmf/common.sh@478 -- # killprocess 70830 00:15:54.879 11:19:36 -- common/autotest_common.sh@926 -- # '[' -z 70830 ']' 00:15:54.879 11:19:36 -- common/autotest_common.sh@930 -- # kill -0 70830 00:15:54.879 11:19:36 -- common/autotest_common.sh@931 -- # uname 00:15:54.879 11:19:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:54.879 11:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70830 00:15:54.879 killing process with pid 70830 00:15:54.879 11:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:54.879 11:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
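With nvme1n1 back in the list, the test clears its traps and shuts the host app down through the killprocess helper whose xtrace fills the next lines: pid sanity check, liveness check, refusal to signal a sudo wrapper, then kill and wait. A condensed, approximate rendering of what those common/autotest_common.sh @926-@950 lines execute (the real helper has extra branching for sudo-wrapped processes):

    # approximate shape of killprocess as traced here (pid 70862 = the host reactor)
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                   # must still be alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]]                      # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it before moving on
    }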
00:15:54.879 11:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70830' 00:15:54.879 11:19:36 -- common/autotest_common.sh@945 -- # kill 70830 00:15:54.879 11:19:36 -- common/autotest_common.sh@950 -- # wait 70830 00:15:55.138 11:19:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.138 11:19:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:55.138 11:19:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:55.138 11:19:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.138 11:19:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:55.138 11:19:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.138 11:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.138 11:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.138 11:19:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:55.138 00:15:55.138 real 0m14.382s 00:15:55.138 user 0m22.651s 00:15:55.138 sys 0m2.513s 00:15:55.138 11:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.138 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:55.138 ************************************ 00:15:55.138 END TEST nvmf_discovery_remove_ifc 00:15:55.138 ************************************ 00:15:55.138 11:19:36 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:55.138 11:19:36 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:55.138 11:19:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:55.138 11:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.138 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:55.138 ************************************ 00:15:55.138 START TEST nvmf_digest 00:15:55.138 ************************************ 00:15:55.138 11:19:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:55.398 * Looking for test storage... 
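nvmftestfini then unwinds everything the prologue set up: flush I/O, unload the NVMe/TCP initiator modules, kill the nvmf_tgt reactor (pid 70830 here), delete the target namespace and flush the initiator-side address. Stripped of its retry loop and the iso/veth conditionals, the teardown traced above amounts to the following sketch:

    # condensed view of the teardown traced at nvmf/common.sh @476-@484
    sync
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"         # 70830 in this run: the nvmf_tgt started by nvmfappstart
    remove_spdk_ns                 # deletes nvmf_tgt_ns_spdk together with its veth ends
    ip -4 addr flush nvmf_init_if  # leave the initiator interface unconfigured for the next test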
00:15:55.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:55.398 11:19:36 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.398 11:19:36 -- nvmf/common.sh@7 -- # uname -s 00:15:55.398 11:19:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.398 11:19:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.398 11:19:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.398 11:19:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.398 11:19:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.398 11:19:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.398 11:19:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.398 11:19:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.398 11:19:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.398 11:19:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:55.398 11:19:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:15:55.398 11:19:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.398 11:19:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.398 11:19:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.398 11:19:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.398 11:19:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.398 11:19:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.398 11:19:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.398 11:19:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.398 11:19:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.398 11:19:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.398 11:19:36 -- paths/export.sh@5 
-- # export PATH 00:15:55.398 11:19:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.398 11:19:36 -- nvmf/common.sh@46 -- # : 0 00:15:55.398 11:19:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:55.398 11:19:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:55.398 11:19:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:55.398 11:19:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.398 11:19:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.398 11:19:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:55.398 11:19:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:55.398 11:19:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:55.398 11:19:36 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:55.398 11:19:36 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:55.398 11:19:36 -- host/digest.sh@16 -- # runtime=2 00:15:55.398 11:19:36 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:55.398 11:19:36 -- host/digest.sh@132 -- # nvmftestinit 00:15:55.398 11:19:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:55.398 11:19:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.398 11:19:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:55.398 11:19:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:55.398 11:19:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:55.398 11:19:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.398 11:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.398 11:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.398 11:19:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:55.398 11:19:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:55.398 11:19:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.398 11:19:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.398 11:19:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:55.398 11:19:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:55.398 11:19:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.398 11:19:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.398 11:19:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.398 11:19:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.398 11:19:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.398 11:19:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.398 11:19:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.398 11:19:36 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.398 11:19:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:55.398 11:19:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:55.398 Cannot find device "nvmf_tgt_br" 00:15:55.398 11:19:36 -- nvmf/common.sh@154 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.398 Cannot find device "nvmf_tgt_br2" 00:15:55.398 11:19:36 -- nvmf/common.sh@155 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:55.398 11:19:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:55.398 Cannot find device "nvmf_tgt_br" 00:15:55.398 11:19:36 -- nvmf/common.sh@157 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:55.398 Cannot find device "nvmf_tgt_br2" 00:15:55.398 11:19:36 -- nvmf/common.sh@158 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:55.398 11:19:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:55.398 11:19:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.398 11:19:36 -- nvmf/common.sh@161 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.398 11:19:36 -- nvmf/common.sh@162 -- # true 00:15:55.398 11:19:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.398 11:19:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.398 11:19:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.398 11:19:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.398 11:19:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.657 11:19:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.657 11:19:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.657 11:19:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.657 11:19:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:55.657 11:19:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:55.657 11:19:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:55.657 11:19:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:55.657 11:19:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:55.657 11:19:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.657 11:19:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.657 11:19:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.657 11:19:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:55.657 11:19:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:55.657 11:19:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.657 11:19:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.657 11:19:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.657 
11:19:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.657 11:19:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.658 11:19:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:55.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:55.658 00:15:55.658 --- 10.0.0.2 ping statistics --- 00:15:55.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.658 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:55.658 11:19:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:55.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:55.658 00:15:55.658 --- 10.0.0.3 ping statistics --- 00:15:55.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.658 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:55.658 11:19:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:55.658 00:15:55.658 --- 10.0.0.1 ping statistics --- 00:15:55.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.658 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:55.658 11:19:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.658 11:19:37 -- nvmf/common.sh@421 -- # return 0 00:15:55.658 11:19:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:55.658 11:19:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.658 11:19:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:55.658 11:19:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:55.658 11:19:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.658 11:19:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:55.658 11:19:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:55.658 11:19:37 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:55.658 11:19:37 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:15:55.658 11:19:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:55.658 11:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.658 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 ************************************ 00:15:55.658 START TEST nvmf_digest_clean 00:15:55.658 ************************************ 00:15:55.658 11:19:37 -- common/autotest_common.sh@1104 -- # run_digest 00:15:55.658 11:19:37 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:15:55.658 11:19:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:55.658 11:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:55.658 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
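The nvmf_veth_init trace above is the entire network fixture for these host tests: a network namespace holding the target's two ports, veth pairs bridged back to the initiator, iptables rules for port 4420, and three pings to prove the wiring. Reduced to its essentials (per-interface link-up calls and the stale-device cleanup at @153-@162 omitted), it builds:

    # target namespace plus veth pairs, as traced at nvmf/common.sh @165-@206
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> both target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator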
00:15:55.658 11:19:37 -- nvmf/common.sh@469 -- # nvmfpid=71265 00:15:55.658 11:19:37 -- nvmf/common.sh@470 -- # waitforlisten 71265 00:15:55.658 11:19:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:55.658 11:19:37 -- common/autotest_common.sh@819 -- # '[' -z 71265 ']' 00:15:55.658 11:19:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.658 11:19:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:55.658 11:19:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.658 11:19:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:55.658 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 [2024-10-13 11:19:37.248009] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:55.658 [2024-10-13 11:19:37.248114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.917 [2024-10-13 11:19:37.389828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.917 [2024-10-13 11:19:37.457551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:55.917 [2024-10-13 11:19:37.457718] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.917 [2024-10-13 11:19:37.457743] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.917 [2024-10-13 11:19:37.457754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
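Because NET_TYPE=virt, nvmfappstart prefixes the target binary with the namespace command, so the listener at 10.0.0.2:4420 lives entirely inside nvmf_tgt_ns_spdk. The launch traced above (nvmf/common.sh @208 and @468-@470) boils down to:

    # start nvmf_tgt inside the target namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!                  # 71265 in this run
    waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs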
00:15:55.917 [2024-10-13 11:19:37.457789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.917 11:19:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.917 11:19:37 -- common/autotest_common.sh@852 -- # return 0 00:15:55.917 11:19:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:55.917 11:19:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:55.917 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:56.176 11:19:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.176 11:19:37 -- host/digest.sh@120 -- # common_target_config 00:15:56.176 11:19:37 -- host/digest.sh@43 -- # rpc_cmd 00:15:56.176 11:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.176 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:56.176 null0 00:15:56.176 [2024-10-13 11:19:37.607605] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.176 [2024-10-13 11:19:37.631762] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.176 11:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.176 11:19:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:15:56.176 11:19:37 -- host/digest.sh@77 -- # local rw bs qd 00:15:56.176 11:19:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:56.176 11:19:37 -- host/digest.sh@80 -- # rw=randread 00:15:56.176 11:19:37 -- host/digest.sh@80 -- # bs=4096 00:15:56.176 11:19:37 -- host/digest.sh@80 -- # qd=128 00:15:56.176 11:19:37 -- host/digest.sh@82 -- # bperfpid=71288 00:15:56.176 11:19:37 -- host/digest.sh@83 -- # waitforlisten 71288 /var/tmp/bperf.sock 00:15:56.176 11:19:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:56.176 11:19:37 -- common/autotest_common.sh@819 -- # '[' -z 71288 ']' 00:15:56.176 11:19:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:56.176 11:19:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.176 11:19:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:56.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:56.176 11:19:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.176 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:56.176 [2024-10-13 11:19:37.692619] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
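run_bperf drives one combination of workload, block size and queue depth per invocation; the randread/4096/qd128 case traced here starts bdevperf against its own RPC socket, attaches the target namespace with data digest enabled (the --ddgst flag is the point of digest.sh), and then lets bdevperf.py run the 2-second workload whose Latency table follows. With paths shortened relative to the spdk repo, the host side of one run looks like:

    # one run_bperf iteration (host/digest.sh @81-@91), randread / 4 KiB / qd 128
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests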
00:15:56.176 [2024-10-13 11:19:37.692918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:15:56.434 [2024-10-13 11:19:37.832551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.434 [2024-10-13 11:19:37.886717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.434 11:19:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:56.434 11:19:37 -- common/autotest_common.sh@852 -- # return 0 00:15:56.434 11:19:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:56.434 11:19:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:56.434 11:19:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:56.693 11:19:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:56.693 11:19:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:56.951 nvme0n1 00:15:56.951 11:19:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:56.951 11:19:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:57.210 Running I/O for 2 seconds... 00:15:59.114 00:15:59.114 Latency(us) 00:15:59.114 [2024-10-13T11:19:40.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.114 [2024-10-13T11:19:40.716Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:59.114 nvme0n1 : 2.01 16698.91 65.23 0.00 0.00 7659.75 6911.07 18945.86 00:15:59.114 [2024-10-13T11:19:40.716Z] =================================================================================================================== 00:15:59.114 [2024-10-13T11:19:40.716Z] Total : 16698.91 65.23 0.00 0.00 7659.75 6911.07 18945.86 00:15:59.114 0 00:15:59.114 11:19:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:59.114 11:19:40 -- host/digest.sh@92 -- # get_accel_stats 00:15:59.114 11:19:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:59.114 11:19:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:59.114 11:19:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:59.114 | select(.opcode=="crc32c") 00:15:59.114 | "\(.module_name) \(.executed)"' 00:15:59.372 11:19:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:59.372 11:19:40 -- host/digest.sh@93 -- # exp_module=software 00:15:59.372 11:19:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:59.372 11:19:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:59.372 11:19:40 -- host/digest.sh@97 -- # killprocess 71288 00:15:59.372 11:19:40 -- common/autotest_common.sh@926 -- # '[' -z 71288 ']' 00:15:59.372 11:19:40 -- common/autotest_common.sh@930 -- # kill -0 71288 00:15:59.372 11:19:40 -- common/autotest_common.sh@931 -- # uname 00:15:59.373 11:19:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:59.373 11:19:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71288 00:15:59.373 killing process with pid 71288 00:15:59.373 Received shutdown signal, test time was about 
2.000000 seconds 00:15:59.373 00:15:59.373 Latency(us) 00:15:59.373 [2024-10-13T11:19:40.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.373 [2024-10-13T11:19:40.975Z] =================================================================================================================== 00:15:59.373 [2024-10-13T11:19:40.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.373 11:19:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:59.373 11:19:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:59.373 11:19:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71288' 00:15:59.373 11:19:40 -- common/autotest_common.sh@945 -- # kill 71288 00:15:59.373 11:19:40 -- common/autotest_common.sh@950 -- # wait 71288 00:15:59.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:59.632 11:19:41 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:15:59.633 11:19:41 -- host/digest.sh@77 -- # local rw bs qd 00:15:59.633 11:19:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:59.633 11:19:41 -- host/digest.sh@80 -- # rw=randread 00:15:59.633 11:19:41 -- host/digest.sh@80 -- # bs=131072 00:15:59.633 11:19:41 -- host/digest.sh@80 -- # qd=16 00:15:59.633 11:19:41 -- host/digest.sh@82 -- # bperfpid=71341 00:15:59.633 11:19:41 -- host/digest.sh@83 -- # waitforlisten 71341 /var/tmp/bperf.sock 00:15:59.633 11:19:41 -- common/autotest_common.sh@819 -- # '[' -z 71341 ']' 00:15:59.633 11:19:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:59.633 11:19:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:59.633 11:19:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:59.633 11:19:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:59.633 11:19:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:59.633 11:19:41 -- common/autotest_common.sh@10 -- # set +x 00:15:59.633 [2024-10-13 11:19:41.183348] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:59.633 [2024-10-13 11:19:41.183651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:15:59.633 Zero copy mechanism will not be used. 
00:15:59.633 =spdk_pid71341 ] 00:15:59.892 [2024-10-13 11:19:41.323165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.892 [2024-10-13 11:19:41.377642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.892 11:19:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.892 11:19:41 -- common/autotest_common.sh@852 -- # return 0 00:15:59.892 11:19:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:59.892 11:19:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:59.892 11:19:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:00.150 11:19:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.150 11:19:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.718 nvme0n1 00:16:00.718 11:19:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:00.718 11:19:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:00.718 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:00.718 Zero copy mechanism will not be used. 00:16:00.718 Running I/O for 2 seconds... 00:16:02.623 00:16:02.623 Latency(us) 00:16:02.623 [2024-10-13T11:19:44.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.623 [2024-10-13T11:19:44.225Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:02.623 nvme0n1 : 2.00 8112.87 1014.11 0.00 0.00 1969.42 1735.21 5272.67 00:16:02.623 [2024-10-13T11:19:44.225Z] =================================================================================================================== 00:16:02.623 [2024-10-13T11:19:44.225Z] Total : 8112.87 1014.11 0.00 0.00 1969.42 1735.21 5272.67 00:16:02.623 0 00:16:02.623 11:19:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:02.623 11:19:44 -- host/digest.sh@92 -- # get_accel_stats 00:16:02.623 11:19:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:02.623 11:19:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:02.623 | select(.opcode=="crc32c") 00:16:02.623 | "\(.module_name) \(.executed)"' 00:16:02.623 11:19:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:02.883 11:19:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:02.883 11:19:44 -- host/digest.sh@93 -- # exp_module=software 00:16:02.883 11:19:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:02.883 11:19:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:02.883 11:19:44 -- host/digest.sh@97 -- # killprocess 71341 00:16:02.883 11:19:44 -- common/autotest_common.sh@926 -- # '[' -z 71341 ']' 00:16:02.883 11:19:44 -- common/autotest_common.sh@930 -- # kill -0 71341 00:16:02.883 11:19:44 -- common/autotest_common.sh@931 -- # uname 00:16:02.883 11:19:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.883 11:19:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71341 00:16:03.142 killing process with pid 71341 00:16:03.142 Received shutdown signal, test time was about 2.000000 seconds 00:16:03.142 00:16:03.142 Latency(us) 00:16:03.142 [2024-10-13T11:19:44.744Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:03.142 [2024-10-13T11:19:44.744Z] =================================================================================================================== 00:16:03.142 [2024-10-13T11:19:44.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.142 11:19:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:03.142 11:19:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:03.142 11:19:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71341' 00:16:03.142 11:19:44 -- common/autotest_common.sh@945 -- # kill 71341 00:16:03.142 11:19:44 -- common/autotest_common.sh@950 -- # wait 71341 00:16:03.142 11:19:44 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:03.142 11:19:44 -- host/digest.sh@77 -- # local rw bs qd 00:16:03.142 11:19:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:03.142 11:19:44 -- host/digest.sh@80 -- # rw=randwrite 00:16:03.142 11:19:44 -- host/digest.sh@80 -- # bs=4096 00:16:03.142 11:19:44 -- host/digest.sh@80 -- # qd=128 00:16:03.142 11:19:44 -- host/digest.sh@82 -- # bperfpid=71388 00:16:03.142 11:19:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:03.142 11:19:44 -- host/digest.sh@83 -- # waitforlisten 71388 /var/tmp/bperf.sock 00:16:03.142 11:19:44 -- common/autotest_common.sh@819 -- # '[' -z 71388 ']' 00:16:03.142 11:19:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:03.142 11:19:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.142 11:19:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:03.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:03.142 11:19:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.142 11:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:03.142 [2024-10-13 11:19:44.720924] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
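Every Latency table in this test is followed by the same verdict step (the @92-@95 lines just above, repeated after each later run): accel statistics are pulled over the same bperf socket and filtered with jq, and the run passes only if a non-zero amount of crc32c work was executed by the expected module, which is software since no offload engine is configured in this job. Approximating the variable plumbing, that check looks roughly like:

    # verify the crc32c digests were computed, and by the expected accel module
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    exp_module=software
    (( acc_executed > 0 ))                   # some digest work must have been recorded
    [[ $acc_module == "$exp_module" ]]       # ...and it must have run in the software module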
00:16:03.142 [2024-10-13 11:19:44.721134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71388 ] 00:16:03.401 [2024-10-13 11:19:44.854837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.401 [2024-10-13 11:19:44.906836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.401 11:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.401 11:19:44 -- common/autotest_common.sh@852 -- # return 0 00:16:03.401 11:19:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:03.401 11:19:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:03.401 11:19:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:03.969 11:19:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:03.969 11:19:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:03.969 nvme0n1 00:16:04.228 11:19:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:04.228 11:19:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:04.228 Running I/O for 2 seconds... 00:16:06.152 00:16:06.152 Latency(us) 00:16:06.152 [2024-10-13T11:19:47.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.152 [2024-10-13T11:19:47.754Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.152 nvme0n1 : 2.00 17551.81 68.56 0.00 0.00 7286.54 6464.23 15252.01 00:16:06.152 [2024-10-13T11:19:47.754Z] =================================================================================================================== 00:16:06.152 [2024-10-13T11:19:47.754Z] Total : 17551.81 68.56 0.00 0.00 7286.54 6464.23 15252.01 00:16:06.152 0 00:16:06.152 11:19:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:06.152 11:19:47 -- host/digest.sh@92 -- # get_accel_stats 00:16:06.152 11:19:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:06.152 | select(.opcode=="crc32c") 00:16:06.152 | "\(.module_name) \(.executed)"' 00:16:06.152 11:19:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:06.152 11:19:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:06.731 11:19:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:06.731 11:19:48 -- host/digest.sh@93 -- # exp_module=software 00:16:06.731 11:19:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:06.731 11:19:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:06.731 11:19:48 -- host/digest.sh@97 -- # killprocess 71388 00:16:06.731 11:19:48 -- common/autotest_common.sh@926 -- # '[' -z 71388 ']' 00:16:06.731 11:19:48 -- common/autotest_common.sh@930 -- # kill -0 71388 00:16:06.731 11:19:48 -- common/autotest_common.sh@931 -- # uname 00:16:06.731 11:19:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.731 11:19:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71388 00:16:06.731 killing process with pid 71388 00:16:06.731 Received shutdown signal, test time was about 
2.000000 seconds 00:16:06.731 00:16:06.731 Latency(us) 00:16:06.731 [2024-10-13T11:19:48.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.731 [2024-10-13T11:19:48.333Z] =================================================================================================================== 00:16:06.731 [2024-10-13T11:19:48.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.731 11:19:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:06.732 11:19:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:06.732 11:19:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71388' 00:16:06.732 11:19:48 -- common/autotest_common.sh@945 -- # kill 71388 00:16:06.732 11:19:48 -- common/autotest_common.sh@950 -- # wait 71388 00:16:06.732 11:19:48 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:06.732 11:19:48 -- host/digest.sh@77 -- # local rw bs qd 00:16:06.732 11:19:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:06.732 11:19:48 -- host/digest.sh@80 -- # rw=randwrite 00:16:06.732 11:19:48 -- host/digest.sh@80 -- # bs=131072 00:16:06.732 11:19:48 -- host/digest.sh@80 -- # qd=16 00:16:06.732 11:19:48 -- host/digest.sh@82 -- # bperfpid=71443 00:16:06.732 11:19:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:06.732 11:19:48 -- host/digest.sh@83 -- # waitforlisten 71443 /var/tmp/bperf.sock 00:16:06.732 11:19:48 -- common/autotest_common.sh@819 -- # '[' -z 71443 ']' 00:16:06.732 11:19:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:06.732 11:19:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.732 11:19:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:06.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:06.732 11:19:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.732 11:19:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.732 [2024-10-13 11:19:48.314791] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:06.732 [2024-10-13 11:19:48.315166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71443 ] 00:16:06.732 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:06.732 Zero copy mechanism will not be used. 
00:16:06.991 [2024-10-13 11:19:48.454563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.991 [2024-10-13 11:19:48.504345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.991 11:19:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.991 11:19:48 -- common/autotest_common.sh@852 -- # return 0 00:16:06.991 11:19:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:06.991 11:19:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:06.991 11:19:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:07.250 11:19:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:07.250 11:19:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:07.509 nvme0n1 00:16:07.768 11:19:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:07.768 11:19:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:07.768 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:07.768 Zero copy mechanism will not be used. 00:16:07.768 Running I/O for 2 seconds... 00:16:09.672 00:16:09.672 Latency(us) 00:16:09.672 [2024-10-13T11:19:51.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.672 [2024-10-13T11:19:51.274Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:09.672 nvme0n1 : 2.00 6882.67 860.33 0.00 0.00 2319.82 1690.53 10426.18 00:16:09.672 [2024-10-13T11:19:51.274Z] =================================================================================================================== 00:16:09.672 [2024-10-13T11:19:51.274Z] Total : 6882.67 860.33 0.00 0.00 2319.82 1690.53 10426.18 00:16:09.672 0 00:16:09.672 11:19:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:09.672 11:19:51 -- host/digest.sh@92 -- # get_accel_stats 00:16:09.672 11:19:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:09.673 11:19:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:09.673 | select(.opcode=="crc32c") 00:16:09.673 | "\(.module_name) \(.executed)"' 00:16:09.673 11:19:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:09.931 11:19:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:09.931 11:19:51 -- host/digest.sh@93 -- # exp_module=software 00:16:09.931 11:19:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:09.931 11:19:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:09.931 11:19:51 -- host/digest.sh@97 -- # killprocess 71443 00:16:09.931 11:19:51 -- common/autotest_common.sh@926 -- # '[' -z 71443 ']' 00:16:09.931 11:19:51 -- common/autotest_common.sh@930 -- # kill -0 71443 00:16:09.931 11:19:51 -- common/autotest_common.sh@931 -- # uname 00:16:09.931 11:19:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.931 11:19:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71443 00:16:10.190 killing process with pid 71443 00:16:10.190 Received shutdown signal, test time was about 2.000000 seconds 00:16:10.190 00:16:10.190 Latency(us) 00:16:10.190 [2024-10-13T11:19:51.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:16:10.190 [2024-10-13T11:19:51.792Z] =================================================================================================================== 00:16:10.190 [2024-10-13T11:19:51.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.190 11:19:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:10.190 11:19:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:10.190 11:19:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71443' 00:16:10.190 11:19:51 -- common/autotest_common.sh@945 -- # kill 71443 00:16:10.190 11:19:51 -- common/autotest_common.sh@950 -- # wait 71443 00:16:10.190 11:19:51 -- host/digest.sh@126 -- # killprocess 71265 00:16:10.190 11:19:51 -- common/autotest_common.sh@926 -- # '[' -z 71265 ']' 00:16:10.191 11:19:51 -- common/autotest_common.sh@930 -- # kill -0 71265 00:16:10.191 11:19:51 -- common/autotest_common.sh@931 -- # uname 00:16:10.191 11:19:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.191 11:19:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71265 00:16:10.191 killing process with pid 71265 00:16:10.191 11:19:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.191 11:19:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.191 11:19:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71265' 00:16:10.191 11:19:51 -- common/autotest_common.sh@945 -- # kill 71265 00:16:10.191 11:19:51 -- common/autotest_common.sh@950 -- # wait 71265 00:16:10.449 ************************************ 00:16:10.449 END TEST nvmf_digest_clean 00:16:10.449 ************************************ 00:16:10.449 00:16:10.449 real 0m14.750s 00:16:10.449 user 0m28.546s 00:16:10.449 sys 0m4.245s 00:16:10.449 11:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.449 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 11:19:51 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:10.449 11:19:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.449 11:19:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.449 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 ************************************ 00:16:10.449 START TEST nvmf_digest_error 00:16:10.449 ************************************ 00:16:10.449 11:19:51 -- common/autotest_common.sh@1104 -- # run_digest_error 00:16:10.449 11:19:51 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:10.449 11:19:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:10.449 11:19:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:10.449 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.449 11:19:51 -- nvmf/common.sh@469 -- # nvmfpid=71513 00:16:10.449 11:19:51 -- nvmf/common.sh@470 -- # waitforlisten 71513 00:16:10.449 11:19:51 -- common/autotest_common.sh@819 -- # '[' -z 71513 ']' 00:16:10.449 11:19:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:10.449 11:19:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.449 11:19:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.449 11:19:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.449 11:19:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.449 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:16:10.708 [2024-10-13 11:19:52.048803] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:10.708 [2024-10-13 11:19:52.048935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.708 [2024-10-13 11:19:52.183573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.708 [2024-10-13 11:19:52.236943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.708 [2024-10-13 11:19:52.237087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.708 [2024-10-13 11:19:52.237099] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.708 [2024-10-13 11:19:52.237107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.708 [2024-10-13 11:19:52.237134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.708 11:19:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:10.708 11:19:52 -- common/autotest_common.sh@852 -- # return 0 00:16:10.708 11:19:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:10.708 11:19:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:10.708 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.967 11:19:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.967 11:19:52 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:10.967 11:19:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.967 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.967 [2024-10-13 11:19:52.321543] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:10.967 11:19:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.967 11:19:52 -- host/digest.sh@104 -- # common_target_config 00:16:10.967 11:19:52 -- host/digest.sh@43 -- # rpc_cmd 00:16:10.967 11:19:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.967 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.967 null0 00:16:10.967 [2024-10-13 11:19:52.389890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.967 [2024-10-13 11:19:52.413985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.967 11:19:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.967 11:19:52 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:10.967 11:19:52 -- host/digest.sh@54 -- # local rw bs qd 00:16:10.967 11:19:52 -- host/digest.sh@56 -- # rw=randread 00:16:10.967 11:19:52 -- host/digest.sh@56 -- # bs=4096 00:16:10.967 11:19:52 -- host/digest.sh@56 -- # qd=128 00:16:10.967 11:19:52 -- host/digest.sh@58 -- # bperfpid=71538 00:16:10.967 11:19:52 -- host/digest.sh@60 -- # waitforlisten 71538 /var/tmp/bperf.sock 00:16:10.967 11:19:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:10.967 11:19:52 -- common/autotest_common.sh@819 -- # '[' -z 71538 ']' 00:16:10.967 11:19:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:10.967 11:19:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.967 11:19:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:10.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:10.967 11:19:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.967 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.967 [2024-10-13 11:19:52.464677] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:10.967 [2024-10-13 11:19:52.464933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:16:11.226 [2024-10-13 11:19:52.593491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.226 [2024-10-13 11:19:52.646261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.226 11:19:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.226 11:19:52 -- common/autotest_common.sh@852 -- # return 0 00:16:11.226 11:19:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:11.226 11:19:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:11.485 11:19:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:11.485 11:19:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.485 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.485 11:19:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.485 11:19:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:11.485 11:19:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:11.744 nvme0n1 00:16:11.744 11:19:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:11.744 11:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.744 11:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.744 11:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.744 11:19:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:11.744 11:19:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:12.003 Running I/O for 2 seconds... 
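The xtrace above is the whole setup for the nvmf_digest_error run. As a condensed sketch (reconstructed from the shell trace, not itself part of the captured output; paths are shortened relative to the spdk repo), the target side, started with --wait-for-rpc and answering on its default /var/tmp/spdk.sock RPC socket, routes crc32c through the error accel module and is then armed to corrupt crc32c results, while the bdevperf side on /var/tmp/bperf.sock attaches the subsystem with data digest enabled and runs the workload:

  # target side (nvmf_tgt, /var/tmp/spdk.sock): route crc32c through the error module, then arm corruption
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator side (bdevperf, /var/tmp/bperf.sock): attach over TCP with data digest (--ddgst) and run I/O
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted CRC32C then surfaces in the trace below as a "data digest error" on the qpair, and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22).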
00:16:12.003 [2024-10-13 11:19:53.382002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.382051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.382081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.397262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.397481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.397499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.412343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.412394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.412424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.427208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.427410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.427428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.442128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.442164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.442194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.457010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.457045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.457074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.472133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.472168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.472197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.487360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.487570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.487587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.502578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.502776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.502795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.517901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.518071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.003 [2024-10-13 11:19:53.518089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.003 [2024-10-13 11:19:53.535272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.003 [2024-10-13 11:19:53.535310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.004 [2024-10-13 11:19:53.535386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.004 [2024-10-13 11:19:53.551895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.004 [2024-10-13 11:19:53.551931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.004 [2024-10-13 11:19:53.551959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.004 [2024-10-13 11:19:53.567733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.004 [2024-10-13 11:19:53.567770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.004 [2024-10-13 11:19:53.567799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.004 [2024-10-13 11:19:53.582897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.004 [2024-10-13 11:19:53.583095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.004 [2024-10-13 11:19:53.583112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.004 [2024-10-13 11:19:53.599811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.004 [2024-10-13 11:19:53.599881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.004 [2024-10-13 11:19:53.599894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.615686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.615720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.615749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.630625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.630660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.630712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.645930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.645965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.645993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.661107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.661292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.661308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.676430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.676468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.676497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.691478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.691513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.691541] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.706421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.706455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.706483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.721443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.721477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.721505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.736484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.736657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.736674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.751683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.751718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.751746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.766765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.766942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.766960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.784220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.784256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.784285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.801434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.801474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.801488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.818711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.818750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.818765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.836408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.836446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.836460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.263 [2024-10-13 11:19:53.854284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.263 [2024-10-13 11:19:53.854334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.263 [2024-10-13 11:19:53.854382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.872319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.872399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.872430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.889082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.889133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.889146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.905307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.905649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.921876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.922063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:16719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.922207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.938288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.938516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.938649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.955344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.955519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.955538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.971290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.971336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.971366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:53.986899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:53.986937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:53.986952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.002585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.002784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.002932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.018809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.019126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.019383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.034851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.035182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.035410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.051436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.051677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.051876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.067090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.067244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.067316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.081933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.082171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.082251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.097688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.097912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.098012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.523 [2024-10-13 11:19:54.115139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.523 [2024-10-13 11:19:54.115415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-10-13 11:19:54.115527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.132497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.132707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.132805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.147995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 
00:16:12.783 [2024-10-13 11:19:54.148219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.148318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.163490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.163758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.163854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.178827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.179120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.179200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.194252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.194533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.194600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.209703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.209961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.210038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.225921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.226136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.226242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.243895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.244119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.244202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.261146] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.261414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.261526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.277445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.277692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.277795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.292990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.293202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.293314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.308562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.308773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.308873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.324481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.324694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.324823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.340064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.340429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.355575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.355804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.783 [2024-10-13 11:19:54.355904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:12.783 [2024-10-13 11:19:54.372313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:12.783 [2024-10-13 11:19:54.372470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.784 [2024-10-13 11:19:54.372566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.395377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.395516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.395606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.410662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.410915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.411052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.426015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.426430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.441378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.441638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.441891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.457033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.457275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.457522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.472635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.473047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.473267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.488306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.488626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.504925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.505209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.505540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.521993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.522249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.522495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.537839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.537927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.538012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.552834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.552871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.552901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.567692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.567727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.567756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.582494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.582532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 
11:19:54.582546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.597517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.597553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.043 [2024-10-13 11:19:54.597582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.043 [2024-10-13 11:19:54.612273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.043 [2024-10-13 11:19:54.612308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.044 [2024-10-13 11:19:54.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.044 [2024-10-13 11:19:54.627887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.044 [2024-10-13 11:19:54.627923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.044 [2024-10-13 11:19:54.627954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.644021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.644056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.659430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.659465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.659494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.674166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.674201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.674229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.688996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.689031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19195 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.689060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.704551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.704593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.704624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.721785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.721820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.721849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.737665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.737700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.737729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.752789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.752825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.752854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.767967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.768002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.304 [2024-10-13 11:19:54.768031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.304 [2024-10-13 11:19:54.783081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.304 [2024-10-13 11:19:54.783273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.783307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.798452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.798643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.798875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.813934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.814120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.814254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.829849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.830023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.830157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.845418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.845603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.845743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.862232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.862446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.862575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.880042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.880245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.880533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.305 [2024-10-13 11:19:54.898062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.305 [2024-10-13 11:19:54.898291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.305 [2024-10-13 11:19:54.898450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.581 [2024-10-13 11:19:54.916236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.581 [2024-10-13 11:19:54.916440] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.581 [2024-10-13 11:19:54.916565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.581 [2024-10-13 11:19:54.934791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.581 [2024-10-13 11:19:54.934969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.581 [2024-10-13 11:19:54.935111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.581 [2024-10-13 11:19:54.953687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.581 [2024-10-13 11:19:54.953898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.581 [2024-10-13 11:19:54.954030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.581 [2024-10-13 11:19:54.972032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.581 [2024-10-13 11:19:54.972222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:54.972394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:54.989912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:54.990133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:54.990285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.006649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.006876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.007092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.023695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.023909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.024054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.041405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.041589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.058435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.058610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.058794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.075044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.075216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.075372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.091777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.091948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.092092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.107980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.108129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.108163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.123869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.123905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.123935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.139531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.139566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.139595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.157162] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.157335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.157386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.582 [2024-10-13 11:19:55.174494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.582 [2024-10-13 11:19:55.174663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.582 [2024-10-13 11:19:55.174693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.191294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.191375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.191406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.206497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.206665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.206726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.221664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.221865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.221898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.237480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.237517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.237547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.254051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.254088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.254119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:13.841 [2024-10-13 11:19:55.271627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.271666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.288427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.288474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.288504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.304333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.304394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.304424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.319370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.319590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.319624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.334653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.334882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.334917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.349867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.350034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.350067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 [2024-10-13 11:19:55.364991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2199d40) 00:16:13.841 [2024-10-13 11:19:55.365048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.841 [2024-10-13 11:19:55.365080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.841 00:16:13.841 Latency(us) 00:16:13.841 [2024-10-13T11:19:55.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.841 [2024-10-13T11:19:55.443Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:13.841 nvme0n1 : 2.01 15732.45 61.45 0.00 0.00 8129.67 7119.59 30027.40 00:16:13.841 [2024-10-13T11:19:55.443Z] =================================================================================================================== 00:16:13.841 [2024-10-13T11:19:55.443Z] Total : 15732.45 61.45 0.00 0.00 8129.67 7119.59 30027.40 00:16:13.841 0 00:16:13.841 11:19:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:13.841 11:19:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:13.841 11:19:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:13.841 11:19:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:13.841 | .driver_specific 00:16:13.841 | .nvme_error 00:16:13.841 | .status_code 00:16:13.841 | .command_transient_transport_error' 00:16:14.101 11:19:55 -- host/digest.sh@71 -- # (( 124 > 0 )) 00:16:14.101 11:19:55 -- host/digest.sh@73 -- # killprocess 71538 00:16:14.101 11:19:55 -- common/autotest_common.sh@926 -- # '[' -z 71538 ']' 00:16:14.101 11:19:55 -- common/autotest_common.sh@930 -- # kill -0 71538 00:16:14.101 11:19:55 -- common/autotest_common.sh@931 -- # uname 00:16:14.101 11:19:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:14.101 11:19:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71538 00:16:14.101 killing process with pid 71538 00:16:14.101 Received shutdown signal, test time was about 2.000000 seconds 00:16:14.101 00:16:14.101 Latency(us) 00:16:14.101 [2024-10-13T11:19:55.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.101 [2024-10-13T11:19:55.703Z] =================================================================================================================== 00:16:14.101 [2024-10-13T11:19:55.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.101 11:19:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:14.101 11:19:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:14.101 11:19:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71538' 00:16:14.101 11:19:55 -- common/autotest_common.sh@945 -- # kill 71538 00:16:14.101 11:19:55 -- common/autotest_common.sh@950 -- # wait 71538 00:16:14.359 11:19:55 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:14.359 11:19:55 -- host/digest.sh@54 -- # local rw bs qd 00:16:14.359 11:19:55 -- host/digest.sh@56 -- # rw=randread 00:16:14.359 11:19:55 -- host/digest.sh@56 -- # bs=131072 00:16:14.359 11:19:55 -- host/digest.sh@56 -- # qd=16 00:16:14.359 11:19:55 -- host/digest.sh@58 -- # bperfpid=71585 00:16:14.359 11:19:55 -- host/digest.sh@60 -- # waitforlisten 71585 /var/tmp/bperf.sock 00:16:14.359 11:19:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:14.359 11:19:55 -- common/autotest_common.sh@819 -- # '[' -z 71585 ']' 00:16:14.359 11:19:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:14.359 11:19:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:14.359 11:19:55 -- common/autotest_common.sh@826 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:14.359 11:19:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:14.359 11:19:55 -- common/autotest_common.sh@10 -- # set +x 00:16:14.359 [2024-10-13 11:19:55.906374] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:14.359 [2024-10-13 11:19:55.906673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71585 ] 00:16:14.359 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:14.359 Zero copy mechanism will not be used. 00:16:14.618 [2024-10-13 11:19:56.041245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.618 [2024-10-13 11:19:56.094512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.554 11:19:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.554 11:19:56 -- common/autotest_common.sh@852 -- # return 0 00:16:15.554 11:19:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:15.555 11:19:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:15.555 11:19:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:15.555 11:19:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.555 11:19:57 -- common/autotest_common.sh@10 -- # set +x 00:16:15.555 11:19:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.555 11:19:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:15.555 11:19:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:15.813 nvme0n1 00:16:15.813 11:19:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:15.813 11:19:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.813 11:19:57 -- common/autotest_common.sh@10 -- # set +x 00:16:15.813 11:19:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.813 11:19:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:15.813 11:19:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:16.074 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:16.074 Zero copy mechanism will not be used. 00:16:16.074 Running I/O for 2 seconds... 
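For readers following the shell trace above: host/digest.sh exercises the TCP data digest path by running bdevperf against its own RPC socket, attaching the controller with --ddgst, corrupting crc32c results through the accel error-injection RPC, and then checking that bdev_get_iostat reports a non-zero command_transient_transport_error count (124 in the run that just finished). Below is a minimal sketch of that sequence, not the actual test script; it assumes the paths and addresses shown in this log (an SPDK checkout at /home/vagrant/spdk_repo/spdk, an nvmf/TCP subsystem listening at 10.0.0.2:4420, and a resulting bdev named nvme0n1), and it sends accel_error_inject_error to the target app's default RPC socket as the trace does.

#!/usr/bin/env bash
# Sketch only: mirrors the digest.sh steps visible in this log, with assumed paths.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf on a private RPC socket: randread, 128 KiB I/Os, queue depth 16, 2 s, wait for RPC (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
sleep 1   # the real script polls the socket (waitforlisten) instead of sleeping

# Count NVMe errors per status code and retry indefinitely on transient transport errors.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while crc32c error injection is disabled...
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then turn on crc32c corruption (arguments copied from the trace) and run the workload.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# The test passes when the transient transport error counter is non-zero.
errs=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) && echo "saw $errs transient transport errors"

kill "$bperfpid"; wait "$bperfpid"

Every "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pair in the output that follows is one read whose injected digest mismatch was detected on the host side and retried, which is exactly what this counter accumulates.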
00:16:16.074 [2024-10-13 11:19:57.476415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.476478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.476494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.480527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.480565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.480578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.484633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.484686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.484728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.488667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.488717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.488730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.492680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.492715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.492743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.496707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.496741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.496770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.500671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.500705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.500732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.504741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.504775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.504803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.508666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.508700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.508728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.512614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.512649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.512676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.516633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.516667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.520559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.520595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.520623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.524480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.524515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.524543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.528491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.528525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.528552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.532457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.532519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.536354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.536388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.536416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.540279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.540314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.540358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.544179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.544214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.544241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.548130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.548164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.548192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.552359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.552421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.552435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.556812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.556864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.074 [2024-10-13 11:19:57.556893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.561013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.561048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.074 [2024-10-13 11:19:57.561076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.074 [2024-10-13 11:19:57.565032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.074 [2024-10-13 11:19:57.565067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.565095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.569162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.569197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.569225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.573159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.573194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.573222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.577312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.577381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.577395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.581191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.581225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.581253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.585044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.585078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.585106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.588817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.588851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.588879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.592654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.592687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.592714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.596543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.596605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.600484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.600518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.600545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.604519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.604553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.604580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.608401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.608433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.608461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.612370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.612402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.612429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.616229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.616263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.616291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.620016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.620050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.620078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.623960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.623994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.624022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.627821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.627854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.627882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.631604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.631665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.635454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.635487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.635514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.639336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.639540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.639556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.643458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.643492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.643519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.647339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.647544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.647560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.651674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.651709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.651737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.655849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.655886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.655916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.660354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.660436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.660451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.664969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.075 [2024-10-13 11:19:57.669481] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.075 [2024-10-13 11:19:57.669517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.075 [2024-10-13 11:19:57.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.673864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.673901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.673915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.678575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.678628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.678642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.682670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.682731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.682745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.686745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.686796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.686808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.690783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.690822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.690835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.694913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.694952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.694980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.698928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.698980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.699007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.702961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.703012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.703024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.706961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.707057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.707070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.711073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.711123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.711164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.715085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.715119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.715146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.719086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.719149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.719176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.723060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.723110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.723123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.727131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.727208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.336 [2024-10-13 11:19:57.731168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.336 [2024-10-13 11:19:57.731218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.336 [2024-10-13 11:19:57.731245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.735303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.735382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.735411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.739288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.739362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.739393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.743282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.743357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.743387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.747271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.747364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.747378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.751294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.751370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.751399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.755389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.755452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.755480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.759422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.759482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.759509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.763456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.763506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.763533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.767528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.767579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.767606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.771511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.771559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.771586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.775535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.775585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.779555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.779605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.337 [2024-10-13 11:19:57.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.783567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.783616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.783643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.787529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.787579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.787608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.791592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.791641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.791668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.795571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.795621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.795648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.799528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.799578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.799605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.803544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.803593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.803620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.807602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.807652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.807680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.811826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.811876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.811903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.816299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.816373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.816400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.820573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.820622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.820649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.824617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.824666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.824694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.828665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.828714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.832776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.832828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.832855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.836765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.836816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.836843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.840764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.840814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.840842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.844888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.844939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.844966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.337 [2024-10-13 11:19:57.848984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.337 [2024-10-13 11:19:57.849034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.337 [2024-10-13 11:19:57.849061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.853105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.853156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.853183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.857223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.857273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.857301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.861269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.861345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.861358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.865299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:16.338 [2024-10-13 11:19:57.865359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.865388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.869275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.869348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.869362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.873269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.873345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.873361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.877297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.877357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.877385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.881328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.881387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.881415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.885421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.885473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.885502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.890032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.890083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.890110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.894304] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.894381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.894411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.898622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.898673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.898738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.902850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.902888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.902901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.907136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.907187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.907213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.911366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.911430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.911459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.915607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.915657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.915685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.920039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.920090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.920118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.924508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.924546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.924560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.929027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.929077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.929105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.338 [2024-10-13 11:19:57.933620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.338 [2024-10-13 11:19:57.933657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.338 [2024-10-13 11:19:57.933670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.938007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.938059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.938086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.942673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.942723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.942738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.946956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.946996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.947023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.951245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.951295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.951323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.955647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.955712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.955739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.959844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.959893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.959920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.963955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.964006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.964032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.968024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.968074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.968102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.972146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.972195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.972222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.976124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.976173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.976201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.980235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.980286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.980314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.984304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.984365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.984399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.988263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.988313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.988354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.992363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.992410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.992437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:57.996397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:57.996446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:57.996473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.000289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.000363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.000377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.004326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.004387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.004415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.008385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.008436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.008463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.012484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.012535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.012562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.016465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.016514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.016540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.020466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.020515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.020541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.024508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.024557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.024585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.028467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.028515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.028542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.032471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.032521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.032548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.036510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.036559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.036586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.040721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.040784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.040811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.044750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.599 [2024-10-13 11:19:58.044800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.599 [2024-10-13 11:19:58.044827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.599 [2024-10-13 11:19:58.048757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.048807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.048834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.052797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.052847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.052874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.056840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.056890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.060898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.060947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.060974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.064928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.064977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.065003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.068966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.069002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.069030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.073433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.073496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.077883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.077933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.077960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.081898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.081948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.081975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.085904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.085955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.085982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.089953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.090003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.090029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.094085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:16.600 [2024-10-13 11:19:58.094134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.098082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.098132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.098158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.102081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.102116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.102143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.106270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.106332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.110564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.110601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.110629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.114663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.114737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.114750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.118656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.118713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.118741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.122574] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.122608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.122635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.126506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.126540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.126567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.130487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.130520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.130547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.134488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.134537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.134565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.138336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.138404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.138432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.142260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.142309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.142361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.146293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.146370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.146399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.150256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.150306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.150351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.154134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.154184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.154210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.158148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.158210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.162126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.600 [2024-10-13 11:19:58.162160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.600 [2024-10-13 11:19:58.162187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.600 [2024-10-13 11:19:58.166017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.166067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.166094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.170184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.170233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.170260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.174348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.174395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.174422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.178227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.178276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.178303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.182098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.182134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.182160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.186125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.186176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.186203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.190075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.190124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.190151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.601 [2024-10-13 11:19:58.194303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.601 [2024-10-13 11:19:58.194380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.601 [2024-10-13 11:19:58.194409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.861 [2024-10-13 11:19:58.198799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.861 [2024-10-13 11:19:58.198838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.861 [2024-10-13 11:19:58.198851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.861 [2024-10-13 11:19:58.203586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.861 [2024-10-13 11:19:58.203652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.861 [2024-10-13 11:19:58.203697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.861 [2024-10-13 11:19:58.208069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.861 [2024-10-13 11:19:58.208119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.861 [2024-10-13 11:19:58.208146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.861 [2024-10-13 11:19:58.212488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.861 [2024-10-13 11:19:58.212538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.861 [2024-10-13 11:19:58.212566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.861 [2024-10-13 11:19:58.216761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.861 [2024-10-13 11:19:58.216810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.861 [2024-10-13 11:19:58.216837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.220972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.221021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.221048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.225115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.225164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.225191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.229265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.229314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.229353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.233211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.233262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.862 [2024-10-13 11:19:58.233289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.237203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.237254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.237281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.241164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.241213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.241241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.245128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.245177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.245204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.249168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.249217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.249245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.253109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.253159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.253186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.257154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.257205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.257246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.261104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.261153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.261181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.265069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.265119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.265146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.269045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.269094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.269121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.273012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.273061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.273089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.276979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.277028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.277055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.280990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.281039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.281066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.285001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.285050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.285078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.289002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.289052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.289078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.293012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.293061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.293089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.297158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.297208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.297235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.301175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.301225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.301253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.305194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.305228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.305255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.309233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.309267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.309294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.313256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.313307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.313347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.317519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:16.862 [2024-10-13 11:19:58.317555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.317567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.322030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.322080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.322109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.326305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.326381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.326395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.330780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.330819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.330833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.335430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.335469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.335483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.339765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.339815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.339843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.344128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.344178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.344205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.348669] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.862 [2024-10-13 11:19:58.348736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.862 [2024-10-13 11:19:58.348765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.862 [2024-10-13 11:19:58.353164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.353211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.353240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.357782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.357833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.357861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.362267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.362318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.362376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.366597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.366649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.366677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.370991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.371073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.371085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.375263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.375312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.375368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.379525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.379575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.379603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.383788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.383838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.383865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.387891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.387942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.387971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.391958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.392008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.392036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.396285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.396375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.396407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.400305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.400381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.400409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.404484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.404533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.404560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.408731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.408781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.408809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.412841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.412891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.412919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.417343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.417434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.417448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.422032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.422083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.422111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.426613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.426666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.426680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.430997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.431065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.431077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.435489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.435540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.435568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.439890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.439940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.439967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.444202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.444253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.444280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.448760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.448809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.448835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.453142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.453192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.453219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.863 [2024-10-13 11:19:58.457894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:16.863 [2024-10-13 11:19:58.457948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.863 [2024-10-13 11:19:58.457977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.462311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.462386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.462415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.467108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.467159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.125 [2024-10-13 11:19:58.467186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.471269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.471346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.471390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.475293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.475350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.475379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.479377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.479418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.479446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.483790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.483882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.488415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.488455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.488484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.492924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.492977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.493005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.497416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.497452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.497480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.501905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.501937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.501963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.506256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.506288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.506315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.510574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.510607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.510634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.514764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.514800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.514813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.518879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.518914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.518926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.522759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.522792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.522805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.526601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.526632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.526659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.530492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.530522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.530549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.534434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.534468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.534482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.538320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.538362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.538389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.542408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.542441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.542469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.546342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.125 [2024-10-13 11:19:58.546407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.125 [2024-10-13 11:19:58.546435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.125 [2024-10-13 11:19:58.550502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.550541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.550570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.554421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:17.126 [2024-10-13 11:19:58.554454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.554481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.558426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.558458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.558485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.562412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.562443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.562471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.566309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.566352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.566380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.570503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.570534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.570561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.574427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.574458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.574485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.578330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.578390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.578417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.582328] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.582372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.582400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.586294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.586351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.586364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.590528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.590587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.594906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.594942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.594955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.598877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.598914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.598927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.602870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.602905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.602918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.606759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.606793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.606805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.610636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.610668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.610719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.614721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.614754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.614767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.618548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.618579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.618607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.622531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.622562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.622589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.626393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.626438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.626449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.630597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.630643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.630655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.634417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.634462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.634474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.638245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.638291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.638302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.642139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.642185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.642197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.646234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.646280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.646291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.650138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.650184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.650194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.654122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.654169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.658421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.658453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.658465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.662290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.662346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.126 [2024-10-13 11:19:58.662359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.126 [2024-10-13 11:19:58.666235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.126 [2024-10-13 11:19:58.666281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.666292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.670145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.670190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.670201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.674105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.674161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.674173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.678135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.678180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.678192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.682106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.682151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.682162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.686070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.686117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.686129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.689940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.689985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.127 [2024-10-13 11:19:58.689996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.693803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.693848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.693859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.697749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.697794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.697806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.701785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.701830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.705685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.705729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.705741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.709510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.709554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.709565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.713338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.713381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.713392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.717193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.717239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.717250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.127 [2024-10-13 11:19:58.721523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.127 [2024-10-13 11:19:58.721579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.127 [2024-10-13 11:19:58.721591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.725692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.725749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.725760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.730000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.730051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.730062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.733896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.733941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.733952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.737794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.737840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.737851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.741605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.741649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.741660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.745579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.745625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.745635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.749461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.749505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.749516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.753302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.753359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.753370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.757187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.757233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.757243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.387 [2024-10-13 11:19:58.760993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.387 [2024-10-13 11:19:58.761037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.387 [2024-10-13 11:19:58.761048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.764803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.764847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.764858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.768793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.768840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.768852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.772651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:17.388 [2024-10-13 11:19:58.772697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.772708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.776521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.776567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.776579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.780359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.780403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.780414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.784149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.784194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.784205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.788008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.788054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.788065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.791864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.791910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.791921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.795719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.795764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.795774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.799580] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.799636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.803433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.803477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.803487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.807282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.807327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.807349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.811198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.811247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.811258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.815172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.815221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.815232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.819088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.819151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.819162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.822998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.823063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.823074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.826939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.826972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.830877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.830915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.830927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.834828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.834858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.834870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.838733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.838765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.838776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.842545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.842588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.842599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.846573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.846619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.846646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.851015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.851075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.851102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.855162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.855207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.855218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.859102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.859147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.859158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.862930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.862961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.862973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.866732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.866763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.866775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.870548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.870612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.870639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.874593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.874639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.874650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.878580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.878625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.878637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.388 [2024-10-13 11:19:58.882437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.388 [2024-10-13 11:19:58.882480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.388 [2024-10-13 11:19:58.882491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.886288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.886344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.886357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.890100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.890145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.890155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.893997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.894042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.894053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.897859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.897903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.897914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.901806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.901851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.901861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.905673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.905717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.389 [2024-10-13 11:19:58.905728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.909736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.909781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.909792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.913547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.913591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.913603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.917383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.917427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.921208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.921253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.921264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.925090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.925135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.928994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.929040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.929051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.933196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.933242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.933253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.937329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.937401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.937413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.941764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.941810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.941822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.946210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.946256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.946267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.950636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.950691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.950720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.955134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.955179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.955190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.959538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.959585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.959597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.963837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.963881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.963892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.967902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.967947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.971798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.971843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.971854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.975770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.975815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.975826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.979668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.979713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.979724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.389 [2024-10-13 11:19:58.983945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.389 [2024-10-13 11:19:58.983993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.389 [2024-10-13 11:19:58.984005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:58.988190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:58.988237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:58.988249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:58.992516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:17.650 [2024-10-13 11:19:58.992562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:58.992573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:58.996409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:58.996453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:58.996464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.000246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.000291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.000302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.004200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.004246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.004257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.008320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.008391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.008403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.012235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.012280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.012291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.016135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.016180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.016192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.020019] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.020064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.020075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.023951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.023996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.024008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.027753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.650 [2024-10-13 11:19:59.027798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.650 [2024-10-13 11:19:59.027808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.650 [2024-10-13 11:19:59.031642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.031687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.031698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.035463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.035506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.035517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.039440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.039485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.039496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.043297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.043357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.043369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.047171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.047217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.047228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.051088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.051150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.054986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.055032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.055043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.058912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.058942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.058953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.062734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.062766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.062777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.066483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.066526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.066538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.070315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.070368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.070379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.074150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.074196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.074206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.078056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.078101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.078112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.082025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.082070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.082081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.085938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.085984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.085995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.089826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.089872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.089883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.093707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.093752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.093779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.097594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.097639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.097650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.101451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.101495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.101522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.105266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.105311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.105322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.109200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.109244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.109255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.113078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.113123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.113135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.117103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.117149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.120978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.121023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.121034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.124896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.124941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.651 [2024-10-13 11:19:59.124951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.128789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.128845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.132691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.132736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.132762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.136574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.136630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.651 [2024-10-13 11:19:59.140497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.651 [2024-10-13 11:19:59.140541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.651 [2024-10-13 11:19:59.140553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.144227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.144272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.144283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.148177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.148231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.148242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.152137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.152183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.152194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.156538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.156585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.156597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.160895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.160943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.160955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.164847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.164891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.164902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.168711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.168772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.168783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.172643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.172688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.176515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.176559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.176571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.180310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.180380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.180407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.184256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.184301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.184312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.188135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.188180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.188192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.192034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.192079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.192090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.195995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.196039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.196050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.199895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.199940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.199951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.203871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.203915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.203926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.207724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:17.652 [2024-10-13 11:19:59.207769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.207780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.211621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.211665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.211675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.215521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.215564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.215575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.219424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.219472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.219483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.223358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.223411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.223423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.227456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.227499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.227510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.231329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.231384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.231395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.235222] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.235267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.235278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.239115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.239158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.239169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.243056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.243117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.243129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.652 [2024-10-13 11:19:59.247481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.652 [2024-10-13 11:19:59.247525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.652 [2024-10-13 11:19:59.247552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.251579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.251624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.251635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.255814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.255858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.255869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.259883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.259927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.259938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.263803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.263847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.263858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.267752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.267812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.267822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.271690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.271735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.271746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.275505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.275548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.275559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.279374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.279428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.279440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.283230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.283274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.283285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.287140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.287188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.287199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.290981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.291026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.291037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.294791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.294822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.294834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.298610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.298654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.298664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.302472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.302516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.302528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.913 [2024-10-13 11:19:59.306312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.913 [2024-10-13 11:19:59.306367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.913 [2024-10-13 11:19:59.306378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.310178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.310222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.310233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.314054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.314101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.314113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.317928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.317973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.317984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.321798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.321843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.321854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.325828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.325873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.325885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.329853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.329899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.329910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.333803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.333849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.333861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.338028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.338059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.338070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.342390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.342433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.914 [2024-10-13 11:19:59.342445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.347068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.347125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.347136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.351488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.351524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.351537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.355665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.355726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.355737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.359875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.359921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.359933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.364016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.364060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.364072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.368044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.368090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.371917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.371962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.371973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.375726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.375771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.375782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.379573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.379617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.379628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.383424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.383468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.383479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.387346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.387402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.387413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.391212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.391269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.395088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.395146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.395157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.398933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.398977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.398988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.402869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.402900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.402911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.406631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.406676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.406700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.410425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.410469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.410481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.414272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.414317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.414328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.418073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.418117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.914 [2024-10-13 11:19:59.418128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.914 [2024-10-13 11:19:59.422027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.914 [2024-10-13 11:19:59.422072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.422083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.426031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 
00:16:17.915 [2024-10-13 11:19:59.426075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.426086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.430036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.430082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.430093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.433999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.434044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.434055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.438006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.438051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.438062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.441947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.441992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.442003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.445830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.445875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.445887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.449770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.449816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.449828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.453645] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.453691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.453702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.457513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.457558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.457569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.461527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.461572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.461583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.465573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.465618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.465629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.469569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.469613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.469624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.915 [2024-10-13 11:19:59.473433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2029940) 00:16:17.915 [2024-10-13 11:19:59.473476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.915 [2024-10-13 11:19:59.473487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.915 00:16:17.915 Latency(us) 00:16:17.915 [2024-10-13T11:19:59.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.915 [2024-10-13T11:19:59.517Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:17.915 nvme0n1 : 2.00 7649.33 956.17 0.00 0.00 2088.53 1683.08 4885.41 00:16:17.915 [2024-10-13T11:19:59.517Z] =================================================================================================================== 
00:16:17.915 [2024-10-13T11:19:59.517Z] Total : 7649.33 956.17 0.00 0.00 2088.53 1683.08 4885.41 00:16:17.915 0 00:16:17.915 11:19:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:17.915 11:19:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:17.915 11:19:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:17.915 11:19:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:17.915 | .driver_specific 00:16:17.915 | .nvme_error 00:16:17.915 | .status_code 00:16:17.915 | .command_transient_transport_error' 00:16:18.174 11:19:59 -- host/digest.sh@71 -- # (( 494 > 0 )) 00:16:18.174 11:19:59 -- host/digest.sh@73 -- # killprocess 71585 00:16:18.174 11:19:59 -- common/autotest_common.sh@926 -- # '[' -z 71585 ']' 00:16:18.174 11:19:59 -- common/autotest_common.sh@930 -- # kill -0 71585 00:16:18.174 11:19:59 -- common/autotest_common.sh@931 -- # uname 00:16:18.174 11:19:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:18.174 11:19:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71585 00:16:18.434 11:19:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:18.434 killing process with pid 71585 00:16:18.434 11:19:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:18.434 11:19:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71585' 00:16:18.434 11:19:59 -- common/autotest_common.sh@945 -- # kill 71585 00:16:18.434 Received shutdown signal, test time was about 2.000000 seconds 00:16:18.434 00:16:18.434 Latency(us) 00:16:18.434 [2024-10-13T11:20:00.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.434 [2024-10-13T11:20:00.036Z] =================================================================================================================== 00:16:18.434 [2024-10-13T11:20:00.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.434 11:19:59 -- common/autotest_common.sh@950 -- # wait 71585 00:16:18.434 11:19:59 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:16:18.434 11:19:59 -- host/digest.sh@54 -- # local rw bs qd 00:16:18.434 11:19:59 -- host/digest.sh@56 -- # rw=randwrite 00:16:18.434 11:19:59 -- host/digest.sh@56 -- # bs=4096 00:16:18.434 11:19:59 -- host/digest.sh@56 -- # qd=128 00:16:18.434 11:19:59 -- host/digest.sh@58 -- # bperfpid=71645 00:16:18.434 11:19:59 -- host/digest.sh@60 -- # waitforlisten 71645 /var/tmp/bperf.sock 00:16:18.434 11:19:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:18.434 11:19:59 -- common/autotest_common.sh@819 -- # '[' -z 71645 ']' 00:16:18.434 11:19:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:18.434 11:19:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.434 11:19:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:18.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:18.434 11:19:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.434 11:19:59 -- common/autotest_common.sh@10 -- # set +x 00:16:18.693 [2024-10-13 11:20:00.036312] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:18.693 [2024-10-13 11:20:00.036478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71645 ] 00:16:18.693 [2024-10-13 11:20:00.178481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.693 [2024-10-13 11:20:00.246178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.630 11:20:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.630 11:20:01 -- common/autotest_common.sh@852 -- # return 0 00:16:19.630 11:20:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:19.630 11:20:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:19.888 11:20:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:19.888 11:20:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.888 11:20:01 -- common/autotest_common.sh@10 -- # set +x 00:16:19.888 11:20:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.888 11:20:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:19.888 11:20:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:20.164 nvme0n1 00:16:20.164 11:20:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:20.164 11:20:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.164 11:20:01 -- common/autotest_common.sh@10 -- # set +x 00:16:20.164 11:20:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.164 11:20:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:20.164 11:20:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:20.164 Running I/O for 2 seconds... 
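The shell trace above is the data-digest error path of host/digest.sh: bdevperf is started against the bperf RPC socket, NVMe error statistics and unlimited retries are turned on, a TCP controller is attached with data digest enabled (--ddgst), crc32c corruption is injected through the accel error framework, and a 2-second randwrite workload is run. A minimal sketch of that RPC sequence, reusing only the socket path, target address, flags, and jq filter visible in this run (an illustration of the flow, not captured output):

  # enable per-controller NVMe error counters and retry failed I/O indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any crc32c error injection left over from the previous pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-enable crc32c error injection in corrupt mode (the test passes -i 256)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the timed bdevperf workload over the same RPC socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # read back how many completions failed with a transient transport error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The injected crc32c failures surface as the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion pairs that fill the log below, and the final bdev_get_iostat-plus-jq step is how the test asserts the counter is non-zero (494 in the randread pass that just finished above).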
00:16:20.164 [2024-10-13 11:20:01.708905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ddc00 00:16:20.164 [2024-10-13 11:20:01.710196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.164 [2024-10-13 11:20:01.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.164 [2024-10-13 11:20:01.724038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fef90 00:16:20.164 [2024-10-13 11:20:01.725401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.164 [2024-10-13 11:20:01.725439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.164 [2024-10-13 11:20:01.738260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ff3c8 00:16:20.164 [2024-10-13 11:20:01.739590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.164 [2024-10-13 11:20:01.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.754889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190feb58 00:16:20.437 [2024-10-13 11:20:01.756257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.756291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.771330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fe720 00:16:20.437 [2024-10-13 11:20:01.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.772723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.787288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fe2e8 00:16:20.437 [2024-10-13 11:20:01.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.788656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.801714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fdeb0 00:16:20.437 [2024-10-13 11:20:01.803088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.816223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fda78 00:16:20.437 [2024-10-13 11:20:01.817525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.817553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:20.437 [2024-10-13 11:20:01.830574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fd640 00:16:20.437 [2024-10-13 11:20:01.831878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.437 [2024-10-13 11:20:01.831922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.844989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fd208 00:16:20.438 [2024-10-13 11:20:01.846212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.846255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.860772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fcdd0 00:16:20.438 [2024-10-13 11:20:01.861986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.862029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.875949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fc998 00:16:20.438 [2024-10-13 11:20:01.877237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.877282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.892293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fc560 00:16:20.438 [2024-10-13 11:20:01.893679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.893754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.908411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fc128 00:16:20.438 [2024-10-13 11:20:01.909789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.909833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.923640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fbcf0 00:16:20.438 [2024-10-13 11:20:01.924905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.924948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.938538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fb8b8 00:16:20.438 [2024-10-13 11:20:01.939834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.939878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.953439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fb480 00:16:20.438 [2024-10-13 11:20:01.954636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.954680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.968489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fb048 00:16:20.438 [2024-10-13 11:20:01.969699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.969741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.983365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fac10 00:16:20.438 [2024-10-13 11:20:01.984617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:01.984648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:01.999829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fa7d8 00:16:20.438 [2024-10-13 11:20:02.001048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:02.001095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:02.016593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190fa3a0 00:16:20.438 [2024-10-13 11:20:02.017834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:02.017895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:20.438 [2024-10-13 11:20:02.032940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f9f68 00:16:20.438 [2024-10-13 11:20:02.034168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.438 [2024-10-13 11:20:02.034213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.048774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f9b30 00:16:20.698 [2024-10-13 11:20:02.049881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.049925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.064830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f96f8 00:16:20.698 [2024-10-13 11:20:02.066012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.066057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.080303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f92c0 00:16:20.698 [2024-10-13 11:20:02.081415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.081467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.095241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f8e88 00:16:20.698 [2024-10-13 11:20:02.096318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.096369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.110120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f8a50 00:16:20.698 [2024-10-13 11:20:02.111246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.111291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.125601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f8618 00:16:20.698 [2024-10-13 11:20:02.126712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.126745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.140632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f81e0 00:16:20.698 [2024-10-13 11:20:02.141727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.141787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.155680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f7da8 00:16:20.698 [2024-10-13 11:20:02.156723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.156766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.170952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f7970 00:16:20.698 [2024-10-13 11:20:02.172079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.172122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.185186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f7538 00:16:20.698 [2024-10-13 11:20:02.186198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.186243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.199580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f7100 00:16:20.698 [2024-10-13 11:20:02.200619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.200663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.214019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f6cc8 00:16:20.698 [2024-10-13 11:20:02.215160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.215203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.228529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f6890 00:16:20.698 [2024-10-13 11:20:02.229565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.229622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.243144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f6458 00:16:20.698 [2024-10-13 11:20:02.244116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.244161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.257669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f6020 00:16:20.698 [2024-10-13 11:20:02.258715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.258745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.273103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f5be8 00:16:20.698 [2024-10-13 11:20:02.274078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.274108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:20.698 [2024-10-13 11:20:02.289579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f57b0 00:16:20.698 [2024-10-13 11:20:02.290564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.698 [2024-10-13 11:20:02.290609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.305122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f5378 00:16:20.958 [2024-10-13 11:20:02.306135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.306182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.319529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f4f40 00:16:20.958 [2024-10-13 11:20:02.320521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.320567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.333800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f4b08 00:16:20.958 [2024-10-13 11:20:02.334768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.334800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.348097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f46d0 00:16:20.958 [2024-10-13 11:20:02.349077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.349136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.362445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f4298 00:16:20.958 [2024-10-13 11:20:02.363419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.363481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.377885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f3e60 00:16:20.958 [2024-10-13 11:20:02.378892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.378927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.393215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f3a28 00:16:20.958 [2024-10-13 11:20:02.394207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.394253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.409707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f35f0 00:16:20.958 [2024-10-13 11:20:02.410620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.410668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.425208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f31b8 00:16:20.958 [2024-10-13 11:20:02.426089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.426163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.439636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f2d80 00:16:20.958 [2024-10-13 11:20:02.440515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 
11:20:02.440559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.453835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f2948 00:16:20.958 [2024-10-13 11:20:02.454767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.454798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.468217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f2510 00:16:20.958 [2024-10-13 11:20:02.469029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.469087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.484083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f20d8 00:16:20.958 [2024-10-13 11:20:02.484970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.485002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.499189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f1ca0 00:16:20.958 [2024-10-13 11:20:02.499983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.500014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.513609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f1868 00:16:20.958 [2024-10-13 11:20:02.514438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.514485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.528000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f1430 00:16:20.958 [2024-10-13 11:20:02.528816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.958 [2024-10-13 11:20:02.528847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:20.958 [2024-10-13 11:20:02.542222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f0ff8 00:16:20.958 [2024-10-13 11:20:02.543050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:20.958 [2024-10-13 11:20:02.543095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.557181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f0bc0 00:16:21.218 [2024-10-13 11:20:02.558074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.558118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.571996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f0788 00:16:21.218 [2024-10-13 11:20:02.572810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.572870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.586401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190f0350 00:16:21.218 [2024-10-13 11:20:02.587204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.600945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eff18 00:16:21.218 [2024-10-13 11:20:02.601681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.601712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.615366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190efae0 00:16:21.218 [2024-10-13 11:20:02.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.616142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.629715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ef6a8 00:16:21.218 [2024-10-13 11:20:02.630564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.630609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.644993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ef270 00:16:21.218 [2024-10-13 11:20:02.645694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.645724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.659593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eee38 00:16:21.218 [2024-10-13 11:20:02.660295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.660337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.673830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eea00 00:16:21.218 [2024-10-13 11:20:02.674514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.674545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.688113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ee5c8 00:16:21.218 [2024-10-13 11:20:02.688821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.688853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.702342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ee190 00:16:21.218 [2024-10-13 11:20:02.703037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.703067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.716872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190edd58 00:16:21.218 [2024-10-13 11:20:02.717547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.717567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.733027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ed920 00:16:21.218 [2024-10-13 11:20:02.733776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.733807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.747507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ed4e8 00:16:21.218 [2024-10-13 11:20:02.748152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.748182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.761689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ed0b0 00:16:21.218 [2024-10-13 11:20:02.762304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.762344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.776084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ecc78 00:16:21.218 [2024-10-13 11:20:02.776706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.776737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.790149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ec840 00:16:21.218 [2024-10-13 11:20:02.790826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.218 [2024-10-13 11:20:02.790858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:21.218 [2024-10-13 11:20:02.804217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ec408 00:16:21.218 [2024-10-13 11:20:02.804819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.219 [2024-10-13 11:20:02.804850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.819075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ebfd0 00:16:21.478 [2024-10-13 11:20:02.819740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.819787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.833415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ebb98 00:16:21.478 [2024-10-13 11:20:02.833981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.834014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.847494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eb760 00:16:21.478 [2024-10-13 11:20:02.848080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.848110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.861695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eb328 00:16:21.478 [2024-10-13 11:20:02.862249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.862279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.876068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eaef0 00:16:21.478 [2024-10-13 11:20:02.876614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.876643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.890298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190eaab8 00:16:21.478 [2024-10-13 11:20:02.890879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.890911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.905147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ea680 00:16:21.478 [2024-10-13 11:20:02.905628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.905683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.919321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190ea248 00:16:21.478 [2024-10-13 11:20:02.919854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.919884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.933337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e9e10 00:16:21.478 [2024-10-13 11:20:02.933843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.933874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.947380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e99d8 00:16:21.478 [2024-10-13 11:20:02.947909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.947940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.961541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e95a0 00:16:21.478 [2024-10-13 11:20:02.961979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.962003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.976224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e9168 00:16:21.478 [2024-10-13 11:20:02.976810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.976843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:02.992566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e8d30 00:16:21.478 [2024-10-13 11:20:02.993104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:02.993134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:03.008944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e88f8 00:16:21.478 [2024-10-13 11:20:03.009406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:03.009443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:03.025628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e84c0 00:16:21.478 [2024-10-13 11:20:03.026149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:03.026192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:03.042284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e8088 00:16:21.478 [2024-10-13 11:20:03.042776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:03.042802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:03.058323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e7c50 00:16:21.478 [2024-10-13 
11:20:03.058806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:03.058837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.478 [2024-10-13 11:20:03.074743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e7818 00:16:21.478 [2024-10-13 11:20:03.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.478 [2024-10-13 11:20:03.075250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.090228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e73e0 00:16:21.738 [2024-10-13 11:20:03.090711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.090754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.104989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e6fa8 00:16:21.738 [2024-10-13 11:20:03.105385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.105443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.119703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e6b70 00:16:21.738 [2024-10-13 11:20:03.120080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.120120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.134443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e6738 00:16:21.738 [2024-10-13 11:20:03.134854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.134879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.149444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e6300 00:16:21.738 [2024-10-13 11:20:03.149862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.165076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e5ec8 
00:16:21.738 [2024-10-13 11:20:03.165467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.165494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.181823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e5a90 00:16:21.738 [2024-10-13 11:20:03.182161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.182186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.197821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e5658 00:16:21.738 [2024-10-13 11:20:03.198147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.198172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.213193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e5220 00:16:21.738 [2024-10-13 11:20:03.213539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.213565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.228788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e4de8 00:16:21.738 [2024-10-13 11:20:03.229110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.229137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.244215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e49b0 00:16:21.738 [2024-10-13 11:20:03.244573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.244601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.259223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e4578 00:16:21.738 [2024-10-13 11:20:03.259577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.259606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.274310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with 
pdu=0x2000190e4140 00:16:21.738 [2024-10-13 11:20:03.274641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.274668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.289135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e3d08 00:16:21.738 [2024-10-13 11:20:03.289408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.289433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.303819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e38d0 00:16:21.738 [2024-10-13 11:20:03.304076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.304100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.318656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e3498 00:16:21.738 [2024-10-13 11:20:03.318937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.318963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:21.738 [2024-10-13 11:20:03.333855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e3060 00:16:21.738 [2024-10-13 11:20:03.334094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.738 [2024-10-13 11:20:03.334118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.349095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e2c28 00:16:21.998 [2024-10-13 11:20:03.349319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.349339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.363248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e27f0 00:16:21.998 [2024-10-13 11:20:03.363471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.363491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.377305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2385dc0) with pdu=0x2000190e23b8 00:16:21.998 [2024-10-13 11:20:03.377523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.377542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.391367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e1f80 00:16:21.998 [2024-10-13 11:20:03.391570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.391590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.405586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e1b48 00:16:21.998 [2024-10-13 11:20:03.405820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.405844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.422397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e1710 00:16:21.998 [2024-10-13 11:20:03.422591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.422613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.438226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e12d8 00:16:21.998 [2024-10-13 11:20:03.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.438470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:21.998 [2024-10-13 11:20:03.452910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e0ea0 00:16:21.998 [2024-10-13 11:20:03.453079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.998 [2024-10-13 11:20:03.453101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.466641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e0a68 00:16:21.999 [2024-10-13 11:20:03.466801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.466822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.480259] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e0630 00:16:21.999 [2024-10-13 11:20:03.480432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.480453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.493931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190e01f8 00:16:21.999 [2024-10-13 11:20:03.494065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.494084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.507529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190dfdc0 00:16:21.999 [2024-10-13 11:20:03.507653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.507672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.521440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190df988 00:16:21.999 [2024-10-13 11:20:03.521558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.521577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.535924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190df550 00:16:21.999 [2024-10-13 11:20:03.536035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.536055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.550417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190df118 00:16:21.999 [2024-10-13 11:20:03.550541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.550563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.564424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190dece0 00:16:21.999 [2024-10-13 11:20:03.564520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.564540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.578414] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190de8a8 00:16:21.999 [2024-10-13 11:20:03.578499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.578519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.999 [2024-10-13 11:20:03.592788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190de038 00:16:21.999 [2024-10-13 11:20:03.592878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.999 [2024-10-13 11:20:03.592898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 11:20:03.613932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190de038 00:16:22.258 [2024-10-13 11:20:03.615323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.615395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 11:20:03.628476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190de470 00:16:22.258 [2024-10-13 11:20:03.629791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.629833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 11:20:03.642622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190de8a8 00:16:22.258 [2024-10-13 11:20:03.643953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.643996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 11:20:03.657040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190dece0 00:16:22.258 [2024-10-13 11:20:03.658318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.658407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 11:20:03.672336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190df118 00:16:22.258 [2024-10-13 11:20:03.673746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.673788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:22.258 [2024-10-13 
11:20:03.686693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2385dc0) with pdu=0x2000190df550 00:16:22.258 [2024-10-13 11:20:03.688057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.258 [2024-10-13 11:20:03.688100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:22.258 00:16:22.259 Latency(us) 00:16:22.259 [2024-10-13T11:20:03.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.259 [2024-10-13T11:20:03.861Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.259 nvme0n1 : 2.00 16979.05 66.32 0.00 0.00 7532.20 6523.81 20494.89 00:16:22.259 [2024-10-13T11:20:03.861Z] =================================================================================================================== 00:16:22.259 [2024-10-13T11:20:03.861Z] Total : 16979.05 66.32 0.00 0.00 7532.20 6523.81 20494.89 00:16:22.259 0 00:16:22.259 11:20:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:22.259 11:20:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:22.259 11:20:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:22.259 11:20:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:22.259 | .driver_specific 00:16:22.259 | .nvme_error 00:16:22.259 | .status_code 00:16:22.259 | .command_transient_transport_error' 00:16:22.518 11:20:03 -- host/digest.sh@71 -- # (( 133 > 0 )) 00:16:22.518 11:20:03 -- host/digest.sh@73 -- # killprocess 71645 00:16:22.518 11:20:03 -- common/autotest_common.sh@926 -- # '[' -z 71645 ']' 00:16:22.518 11:20:03 -- common/autotest_common.sh@930 -- # kill -0 71645 00:16:22.518 11:20:03 -- common/autotest_common.sh@931 -- # uname 00:16:22.518 11:20:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.518 11:20:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71645 00:16:22.518 11:20:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:22.518 11:20:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:22.518 killing process with pid 71645 00:16:22.518 11:20:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71645' 00:16:22.518 Received shutdown signal, test time was about 2.000000 seconds 00:16:22.518 00:16:22.518 Latency(us) 00:16:22.518 [2024-10-13T11:20:04.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.518 [2024-10-13T11:20:04.120Z] =================================================================================================================== 00:16:22.518 [2024-10-13T11:20:04.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.518 11:20:04 -- common/autotest_common.sh@945 -- # kill 71645 00:16:22.518 11:20:04 -- common/autotest_common.sh@950 -- # wait 71645 00:16:22.777 11:20:04 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:22.777 11:20:04 -- host/digest.sh@54 -- # local rw bs qd 00:16:22.777 11:20:04 -- host/digest.sh@56 -- # rw=randwrite 00:16:22.777 11:20:04 -- host/digest.sh@56 -- # bs=131072 00:16:22.777 11:20:04 -- host/digest.sh@56 -- # qd=16 00:16:22.777 11:20:04 -- host/digest.sh@58 -- # bperfpid=71700 00:16:22.777 11:20:04 -- host/digest.sh@60 -- # waitforlisten 71700 /var/tmp/bperf.sock 00:16:22.777 11:20:04 -- host/digest.sh@57 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:22.777 11:20:04 -- common/autotest_common.sh@819 -- # '[' -z 71700 ']' 00:16:22.777 11:20:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:22.777 11:20:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:22.777 11:20:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:22.777 11:20:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:22.777 11:20:04 -- common/autotest_common.sh@10 -- # set +x 00:16:22.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:22.777 Zero copy mechanism will not be used. 00:16:22.777 [2024-10-13 11:20:04.241216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:22.777 [2024-10-13 11:20:04.241381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71700 ] 00:16:23.037 [2024-10-13 11:20:04.390905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.037 [2024-10-13 11:20:04.446822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.605 11:20:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.605 11:20:05 -- common/autotest_common.sh@852 -- # return 0 00:16:23.605 11:20:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:23.605 11:20:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:23.864 11:20:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:23.864 11:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.864 11:20:05 -- common/autotest_common.sh@10 -- # set +x 00:16:23.864 11:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.864 11:20:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.864 11:20:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:24.123 nvme0n1 00:16:24.123 11:20:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:24.123 11:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.123 11:20:05 -- common/autotest_common.sh@10 -- # set +x 00:16:24.123 11:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.123 11:20:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:24.123 11:20:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:24.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:24.383 Zero copy mechanism will not be used. 00:16:24.383 Running I/O for 2 seconds... 
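[Editor's note, not part of the captured console output] The block above is where host/digest.sh brings up the second bperf run: it starts bdevperf in wait mode, points it at the NVMe-oF TCP target with data digest (--ddgst) enabled, asks the accel layer to corrupt CRC32C calculations at an injection interval of 32, and then drives I/O so each corrupted digest surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR completions printed below. What follows is a minimal sketch of that sequence, assembled only from commands that appear verbatim in this log; the socket used for accel_error_inject_error (the target application's default RPC socket) is an assumption, and the sketch illustrates the flow rather than reproducing the test script itself.

  spdk=/home/vagrant/spdk_repo/spdk
  bperf_sock=/var/tmp/bperf.sock

  # Start bdevperf on core 1 (mask 0x2) in wait mode (-z); the workload runs only when perform_tests is called.
  $spdk/build/examples/bdevperf -m 2 -r $bperf_sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-command NVMe error statistics and retry failed commands indefinitely.
  $spdk/scripts/rpc.py -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (default RPC socket, assumed): make sure CRC32C error injection starts out disabled.
  $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled; namespace 1 shows up as bdev nvme0n1.
  $spdk/scripts/rpc.py -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt CRC32C results at interval 32 so writes complete with a transient transport error.
  $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second randwrite workload, then read back how many commands hit the injected error.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests
  $spdk/scripts/rpc.py -s $bperf_sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest test passes when the count reported by bdev_get_iostat is greater than zero; the earlier 128-deep run logged above counted 133 such completions before its bperf process was killed.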
00:16:24.383 [2024-10-13 11:20:05.822575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.822944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.822975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.827396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.827754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.827809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.832517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.832865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.832895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.837290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.837611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.837639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.842160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.842514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.842544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.847390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.847733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.847794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.852648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.852987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.853016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.857825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.858103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.858130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.862907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.863220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.863247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.867817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.868095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.868122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.872724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.873021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.873048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.877420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.877717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.877744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.882154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.882465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.882492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.886870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.887167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.887193] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.891663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.891952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.891980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.896411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.896689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.896715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.901160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.383 [2024-10-13 11:20:05.901478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.383 [2024-10-13 11:20:05.901506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.383 [2024-10-13 11:20:05.905903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.906177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.906203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.910729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.911025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.915457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.915759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.915785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.920133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.920437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.920464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.924889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.925170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.925196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.929593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.929888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.929915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.934239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.934530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.934557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.938930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.939237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.939263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.943865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.944141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.944167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.948586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.948864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.948891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.953252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.953562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.384 [2024-10-13 11:20:05.953590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.957964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.958244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.958270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.962724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.963063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.963090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.967706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.968026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.972530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.972821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.972865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.384 [2024-10-13 11:20:05.977380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.384 [2024-10-13 11:20:05.977710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.384 [2024-10-13 11:20:05.977738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.643 [2024-10-13 11:20:05.982606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.643 [2024-10-13 11:20:05.982946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.643 [2024-10-13 11:20:05.982984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:05.987771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:05.988049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:05.988077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:05.992502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:05.992781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:05.992808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:05.997374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:05.997671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:05.997698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.002766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.003113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.003156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.007659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.007960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.007987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.012392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.012696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.017105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.017441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.017469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.021832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.022117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.022144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.026439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.026767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.026797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.031182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.031525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.031552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.035944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.036229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.036258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.040851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.041127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.041154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.045600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.045885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.045912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.050528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.050881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.050910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.055899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.056183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.056210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.061078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.061391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.061431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.066084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.066408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.066454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.071282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.071666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.071726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.076654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.076967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.076995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.081839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.082123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.082150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.087130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.087462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.087492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.092241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 
[2024-10-13 11:20:06.092592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.092622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.097293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.097611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.097638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.102207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.102583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.102613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.107276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.107626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.107653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.112055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.112333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.116801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.644 [2024-10-13 11:20:06.117078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.644 [2024-10-13 11:20:06.117104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.644 [2024-10-13 11:20:06.121587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.121869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.121896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.126172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) 
with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.126497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.126525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.130851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.131186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.131212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.135628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.135934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.135961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.140280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.140571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.140598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.144914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.145190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.145216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.149645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.149926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.149952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.154287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.154584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.154613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.158959] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.159285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.159312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.163753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.164032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.164058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.168473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.168781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.168807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.173139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.173466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.173494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.177805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.178108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.182496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.182833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.182862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.187283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.187619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.187646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 
11:20:06.192024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.192298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.192333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.196670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.196964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.197007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.201378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.201655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.201682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.206076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.206390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.206417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.210886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.211242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.211269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.215643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.215939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.215965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.220489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.220768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.220794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.225129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.225416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.225442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.229878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.230152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.230178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.234667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.235007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.235067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.645 [2024-10-13 11:20:06.239819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.645 [2024-10-13 11:20:06.240135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.645 [2024-10-13 11:20:06.240164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.244875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.245148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.245174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.249932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.250207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.250233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.254683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.255072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.260185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.260576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.260606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.265498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.265862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.265890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.270870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.271217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.271244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.276100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.276440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.276470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.281436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.281796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.281822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.286517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.286831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.286860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.291895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.292183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.292210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.297051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.297337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.297408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.302360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.302706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.302746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.307605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.307961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.307987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.312657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.312984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.313010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.317915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.318192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.318219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.323269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.323636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.323666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.328495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.328855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 
[2024-10-13 11:20:06.328880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.333837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.334121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.334149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.338992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.339321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.339392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.344189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.344549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.344578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.349244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.349580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.349609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.354156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.354490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.354518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.359129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.359444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.906 [2024-10-13 11:20:06.359485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.906 [2024-10-13 11:20:06.364120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.906 [2024-10-13 11:20:06.364427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.364454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.368899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.369179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.369205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.373679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.373979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.374007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.378378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.378654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.378681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.383157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.383498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.383527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.387848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.388125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.388151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.392529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.392823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.392849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.397231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.397521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.397548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.401906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.402181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.402207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.406761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.407097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.407139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.411580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.411874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.411900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.416183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.416506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.416534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.420921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.421194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.421220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.425788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.426071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.426098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.430403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.430679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.435191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.435521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.435549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.439950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.440232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.440259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.444674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.444965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.444991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.449347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.449626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.449653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.453947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.454230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.454256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.458697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.459037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.459079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.463478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 
[2024-10-13 11:20:06.463778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.463806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.468197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.468536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.468564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.473159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.473487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.473515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.478257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.478615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.478645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.483471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.483805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.483832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.488871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.489166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.489194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.494010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.494290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.494316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.907 [2024-10-13 11:20:06.499175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:24.907 [2024-10-13 11:20:06.499544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.907 [2024-10-13 11:20:06.499573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.504695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.505035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.505062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.510135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.510540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.510571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.515646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.515978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.516021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.521106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.521448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.521476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.526263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.526608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.526638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.531455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.531799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.531827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.536520] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.536861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.536889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.541587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.541906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.541933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.168 [2024-10-13 11:20:06.546829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.168 [2024-10-13 11:20:06.547180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.168 [2024-10-13 11:20:06.547207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.551879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.552163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.552190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.556760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.557045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.557072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.561806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.562091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.562117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.566553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.566907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.566938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:25.169 [2024-10-13 11:20:06.571577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.571898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.571926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.576453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.576736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.576764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.581256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.581565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.581588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.586036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.586327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.586376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.590781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.591127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.591155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.595683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.595965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.595993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.600476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.600753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.600780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.605496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.605853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.605883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.610632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.611013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.611055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.615709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.615994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.616022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.620560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.620863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.620890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.625490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.625781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.625808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.630479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.630794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.630823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.635503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.635826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.635853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.640610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.640910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.640937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.645523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.645827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.645854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.650321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.650681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.650719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.655548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.655880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.655907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.660908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.661217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.661244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.666186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.666523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.666552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.671384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.671735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.671777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.676658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.676975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.677003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.681860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.682145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.686783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.687131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.687158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.691931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.169 [2024-10-13 11:20:06.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.169 [2024-10-13 11:20:06.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.169 [2024-10-13 11:20:06.696965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.697253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.697281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.701762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.702043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.702069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.706876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.707217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 
[2024-10-13 11:20:06.707244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.711846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.712127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.712153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.716672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.716961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.716989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.721602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.721910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.721937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.726456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.726794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.726823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.731241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.731562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.731589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.736099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.736423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.736451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.740910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.741209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.741236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.745724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.746008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.746035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.750650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.751038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.755524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.755808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.755835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.760228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.760570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.760597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.170 [2024-10-13 11:20:06.765529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.170 [2024-10-13 11:20:06.765824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.170 [2024-10-13 11:20:06.765866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.770447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.770768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.770797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.775577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.775868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.775896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.780825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.781110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.781138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.785584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.785868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.785895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.790302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.790596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.790623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.795294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.795604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.795631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.800071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.800395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.800424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.804959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.805253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.805280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.809796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.810081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.810108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.814758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.815092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.815134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.819649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.819931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.824618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.824946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.824967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.829587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.829920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.829948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.834557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.834913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.834942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.839379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.839664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.839691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.844199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 
[2024-10-13 11:20:06.844559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.844588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.848971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.849250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.849276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.853724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.854019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.854045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.858441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.858756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.430 [2024-10-13 11:20:06.858785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.430 [2024-10-13 11:20:06.863267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.430 [2024-10-13 11:20:06.863555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.863582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.867939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.868214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.868241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.872843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.873119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.873145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.877621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.877896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.877923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.882404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.882717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.882762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.887353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.887652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.892131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.892460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.892488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.896989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.897277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.897304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.901879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.902184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.902211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.906995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.907320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.907373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.912179] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.912539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.912569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.917239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.917582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.917610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.922267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.922601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.922628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.927305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.927637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.927664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.932041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.932318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.932369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.936853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.937129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.937156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.941587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.941861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.941888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:25.431 [2024-10-13 11:20:06.946255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.946567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.946594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.951319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.951621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.951647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.955940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.956214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.956241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.960828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.961110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.961136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.965611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.965895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.965923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.970370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.970666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.970718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.975200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.975540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.975568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.979998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.980284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.980310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.984871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.985156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.985183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.989665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.989950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.989978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.994402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.994734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.994762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:06.999114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.431 [2024-10-13 11:20:06.999430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.431 [2024-10-13 11:20:06.999457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.431 [2024-10-13 11:20:07.003934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.432 [2024-10-13 11:20:07.004212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.432 [2024-10-13 11:20:07.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.432 [2024-10-13 11:20:07.008808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.432 [2024-10-13 11:20:07.009093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.432 [2024-10-13 11:20:07.009120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.432 [2024-10-13 11:20:07.013599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.432 [2024-10-13 11:20:07.013882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.432 [2024-10-13 11:20:07.013910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.432 [2024-10-13 11:20:07.018250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.432 [2024-10-13 11:20:07.018558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.432 [2024-10-13 11:20:07.018586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.432 [2024-10-13 11:20:07.023069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.432 [2024-10-13 11:20:07.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.432 [2024-10-13 11:20:07.023397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.028440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.028810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.028837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.033565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.033889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.033932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.039067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.039396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.043786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.044063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.044089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.048621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.048924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.048951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.053444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.053731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.053758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.058233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.058553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.058580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.062900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.063249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.063275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.067765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.068042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.068068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.072480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.072776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.072802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.077225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.077535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 
[2024-10-13 11:20:07.077563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.081957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.082231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.082258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.086823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.087161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.091851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.092136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.092163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.097088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.097416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.692 [2024-10-13 11:20:07.097458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.692 [2024-10-13 11:20:07.102321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.692 [2024-10-13 11:20:07.102672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.102710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.107519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.107867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.107893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.112552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.112893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.112919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.117617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.117947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.117973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.122488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.122816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.122844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.127313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.127615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.127641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.132007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.132286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.132313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.136910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.137178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.137204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.141581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.141912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.141941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.146370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.146647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.146673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.151059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.151352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.151388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.155975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.156273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.156301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.161269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.161623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.161651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.166319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.166638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.166666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.171225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.171553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.171580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.176113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.176439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.180902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.181176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.181202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.185645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.185936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.185962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.190299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.190615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.190641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.194990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.195305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.195341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.199786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.200065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.200091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.204607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.204901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.204928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.209266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.209578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.209606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.213988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 
[2024-10-13 11:20:07.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.214290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.218793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.219153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.219179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.223632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.223955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.223983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.228478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.228777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.228804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.233274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.233597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.233624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.237981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.238303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.693 [2024-10-13 11:20:07.238343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.693 [2024-10-13 11:20:07.242781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.693 [2024-10-13 11:20:07.243130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.243156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.247693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.247971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.247998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.252371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.252700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.252729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.257170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.257520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.257548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.262077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.262363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.262401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.266783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.267132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.271607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.271883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.271909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.276344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.276636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.276663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.281104] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.281455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.281484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.694 [2024-10-13 11:20:07.286013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.694 [2024-10-13 11:20:07.286370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.694 [2024-10-13 11:20:07.286411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.291525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.291842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.291870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.297011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.297370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.297412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.301791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.302067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.302094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.306563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.306895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.306924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.311492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.311775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.311802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:25.954 [2024-10-13 11:20:07.316150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.316458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.316485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.320941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.321220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.321247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.325743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.326052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.326079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.330573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.330913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.330941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.335482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.335765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.335791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.340195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.340514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.340537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.344934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.345253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.345281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.349727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.350050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.354414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.354719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.354761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.359227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.359536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.359563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.364417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.364808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.364835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.369669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.370013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.374675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.375084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.375139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.379915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.380198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.954 [2024-10-13 11:20:07.380225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.954 [2024-10-13 11:20:07.384895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.954 [2024-10-13 11:20:07.385198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.385228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.389766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.390047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.390074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.394471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.394797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.394827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.399333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.399635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.399663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.404090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.404442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.404486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.408929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.409205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.409232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.413688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.413994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.414022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.418533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.418862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.418891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.423251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.423539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.423565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.428019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.428302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.428352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.432866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.433140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.433165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.437643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.437937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.437964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.442352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.442631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.442657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.447184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.447495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 
[2024-10-13 11:20:07.447522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.451988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.452266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.452293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.456888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.457165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.457191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.461838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.462124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.462151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.466519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.466855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.466886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.471252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.471542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.471569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.475974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.476249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.476275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.480779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.481055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.481082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.485495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.485794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.485821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.490321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.490657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.490693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.495299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.495674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.495732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.955 [2024-10-13 11:20:07.500636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.955 [2024-10-13 11:20:07.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.955 [2024-10-13 11:20:07.501003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.505990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.506315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.511411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.511759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.511787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.516577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.516903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.516929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.521716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.522071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.522099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.526898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.527239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.527266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.532074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.532387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.532427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.537038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.537318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.537370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.541825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.542102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.542129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.956 [2024-10-13 11:20:07.546505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:25.956 [2024-10-13 11:20:07.546841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.956 [2024-10-13 11:20:07.546870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.552012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.552319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.552373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.557055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.557452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.557492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.561912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.562188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.562215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.566728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.567059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.567100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.572047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.572322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.572374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.576792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.577088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.577115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.216 [2024-10-13 11:20:07.581652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.216 [2024-10-13 11:20:07.581945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.216 [2024-10-13 11:20:07.581972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.586288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 
[2024-10-13 11:20:07.586580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.586607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.591071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.591388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.591423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.595954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.596281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.596310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.600752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.601029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.601055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.605519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.605843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.605868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.610925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.611243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.611285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.616120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.616460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.616489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.621299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.621658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.621688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.626555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.626871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.626900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.631873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.632170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.632197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.637094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.637407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.637448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.642434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.642764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.642794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.647556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.647887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.647928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.652811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.653087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.653113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.658061] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.658421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.663404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.663763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.663789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.668799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.669082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.669109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.674045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.674384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.674424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.679351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.679681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.684624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.684952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.684978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.689801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.690087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
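Each pair of entries in this run records one injected failure: the host-side CRC32C data-digest check on a received PDU fails in data_crc32_calc_done, and the WRITE that carried the data is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retriable rather than fatal status. The entries continue up to the job's latency summary, after which the script totals the failures over bdevperf's RPC socket; that check, lifted from the trace further down, is roughly:

    # Ask the bdevperf instance (RPC socket /var/tmp/bperf.sock) for per-bdev error
    # counters and pull out the transient transport error total; the script only
    # requires it to be greater than zero (this run reports 404).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'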
00:16:26.217 [2024-10-13 11:20:07.694910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.695222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.695249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.700237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.700596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.700625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.705244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.705597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.705625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.710275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.710612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.217 [2024-10-13 11:20:07.710639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.217 [2024-10-13 11:20:07.715138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.217 [2024-10-13 11:20:07.715456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.715484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.720301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.720659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.720687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.725204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.725550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.725579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.730123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.730472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.730500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.735055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.735343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.735382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.739895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.740212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.740240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.744857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.745159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.745187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.749619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.749936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.749963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.754518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.754835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.754863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.759267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.764440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.764729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.764772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.769267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.769615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.769643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.774395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.774669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.774720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.779300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.779630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.779657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.784467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.784755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.784783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.789279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.789625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.789653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.794504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.794832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.794861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.799417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.799701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.799727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.804320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.804643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.804669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.218 [2024-10-13 11:20:07.809070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2386100) with pdu=0x2000190fef90 00:16:26.218 [2024-10-13 11:20:07.809377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.218 [2024-10-13 11:20:07.809403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.218 00:16:26.218 Latency(us) 00:16:26.218 [2024-10-13T11:20:07.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.218 [2024-10-13T11:20:07.820Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:26.218 nvme0n1 : 2.00 6264.33 783.04 0.00 0.00 2548.76 2010.76 10664.49 00:16:26.218 [2024-10-13T11:20:07.820Z] =================================================================================================================== 00:16:26.218 [2024-10-13T11:20:07.820Z] Total : 6264.33 783.04 0.00 0.00 2548.76 2010.76 10664.49 00:16:26.477 0 00:16:26.477 11:20:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:26.477 11:20:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:26.477 | .driver_specific 00:16:26.477 | .nvme_error 00:16:26.477 | .status_code 00:16:26.477 | .command_transient_transport_error' 00:16:26.477 11:20:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:26.477 11:20:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:26.739 11:20:08 -- host/digest.sh@71 -- # (( 404 > 0 )) 00:16:26.739 11:20:08 -- host/digest.sh@73 -- # killprocess 71700 00:16:26.739 11:20:08 -- common/autotest_common.sh@926 -- # '[' -z 71700 ']' 00:16:26.739 11:20:08 -- common/autotest_common.sh@930 -- # kill -0 71700 00:16:26.739 11:20:08 -- common/autotest_common.sh@931 -- # uname 00:16:26.739 11:20:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.739 11:20:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71700 00:16:26.739 11:20:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:26.739 11:20:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:26.739 11:20:08 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 71700' 00:16:26.739 killing process with pid 71700 00:16:26.739 Received shutdown signal, test time was about 2.000000 seconds 00:16:26.739 00:16:26.739 Latency(us) 00:16:26.739 [2024-10-13T11:20:08.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.739 [2024-10-13T11:20:08.341Z] =================================================================================================================== 00:16:26.739 [2024-10-13T11:20:08.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.739 11:20:08 -- common/autotest_common.sh@945 -- # kill 71700 00:16:26.739 11:20:08 -- common/autotest_common.sh@950 -- # wait 71700 00:16:26.739 11:20:08 -- host/digest.sh@115 -- # killprocess 71513 00:16:26.739 11:20:08 -- common/autotest_common.sh@926 -- # '[' -z 71513 ']' 00:16:26.739 11:20:08 -- common/autotest_common.sh@930 -- # kill -0 71513 00:16:26.740 11:20:08 -- common/autotest_common.sh@931 -- # uname 00:16:26.740 11:20:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.740 11:20:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71513 00:16:27.013 11:20:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:27.013 killing process with pid 71513 00:16:27.013 11:20:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:27.013 11:20:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71513' 00:16:27.013 11:20:08 -- common/autotest_common.sh@945 -- # kill 71513 00:16:27.013 11:20:08 -- common/autotest_common.sh@950 -- # wait 71513 00:16:27.013 00:16:27.013 real 0m16.551s 00:16:27.013 user 0m32.343s 00:16:27.013 sys 0m4.483s 00:16:27.013 11:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.013 11:20:08 -- common/autotest_common.sh@10 -- # set +x 00:16:27.013 ************************************ 00:16:27.013 END TEST nvmf_digest_error 00:16:27.013 ************************************ 00:16:27.013 11:20:08 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:27.013 11:20:08 -- host/digest.sh@139 -- # nvmftestfini 00:16:27.013 11:20:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:27.013 11:20:08 -- nvmf/common.sh@116 -- # sync 00:16:27.272 11:20:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:27.272 11:20:08 -- nvmf/common.sh@119 -- # set +e 00:16:27.272 11:20:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:27.272 11:20:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:27.272 rmmod nvme_tcp 00:16:27.272 rmmod nvme_fabrics 00:16:27.272 rmmod nvme_keyring 00:16:27.272 11:20:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.272 11:20:08 -- nvmf/common.sh@123 -- # set -e 00:16:27.272 11:20:08 -- nvmf/common.sh@124 -- # return 0 00:16:27.272 11:20:08 -- nvmf/common.sh@477 -- # '[' -n 71513 ']' 00:16:27.272 11:20:08 -- nvmf/common.sh@478 -- # killprocess 71513 00:16:27.272 11:20:08 -- common/autotest_common.sh@926 -- # '[' -z 71513 ']' 00:16:27.272 11:20:08 -- common/autotest_common.sh@930 -- # kill -0 71513 00:16:27.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71513) - No such process 00:16:27.272 Process with pid 71513 is not found 00:16:27.272 11:20:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71513 is not found' 00:16:27.272 11:20:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.272 11:20:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:27.272 11:20:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:27.272 11:20:08 -- nvmf/common.sh@273 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.272 11:20:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:27.272 11:20:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.272 11:20:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.272 11:20:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.272 11:20:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:27.272 00:16:27.272 real 0m32.016s 00:16:27.272 user 1m1.051s 00:16:27.272 sys 0m9.034s 00:16:27.272 11:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.272 11:20:08 -- common/autotest_common.sh@10 -- # set +x 00:16:27.272 ************************************ 00:16:27.272 END TEST nvmf_digest 00:16:27.272 ************************************ 00:16:27.272 11:20:08 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:27.272 11:20:08 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:27.272 11:20:08 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:27.272 11:20:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:27.272 11:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.272 11:20:08 -- common/autotest_common.sh@10 -- # set +x 00:16:27.272 ************************************ 00:16:27.272 START TEST nvmf_multipath 00:16:27.272 ************************************ 00:16:27.272 11:20:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:27.272 * Looking for test storage... 00:16:27.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.272 11:20:08 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.272 11:20:08 -- nvmf/common.sh@7 -- # uname -s 00:16:27.272 11:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.272 11:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.272 11:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.272 11:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.272 11:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.272 11:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.272 11:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.272 11:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.272 11:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.272 11:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.272 11:20:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:16:27.272 11:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:16:27.272 11:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.272 11:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.272 11:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.273 11:20:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.273 11:20:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.273 11:20:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.273 11:20:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.273 11:20:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.273 11:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.273 11:20:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.273 11:20:08 -- paths/export.sh@5 -- # export PATH 00:16:27.273 11:20:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.273 11:20:08 -- nvmf/common.sh@46 -- # : 0 00:16:27.273 11:20:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:27.273 11:20:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:27.273 11:20:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:27.273 11:20:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.273 11:20:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.273 11:20:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:27.273 11:20:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:27.273 11:20:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:27.273 11:20:08 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.273 11:20:08 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.273 11:20:08 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.273 11:20:08 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:27.273 11:20:08 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.273 11:20:08 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:27.273 11:20:08 -- host/multipath.sh@30 -- # nvmftestinit 00:16:27.273 11:20:08 -- 
nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:27.273 11:20:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.273 11:20:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:27.273 11:20:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:27.273 11:20:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:27.273 11:20:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.273 11:20:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.273 11:20:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.273 11:20:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:27.273 11:20:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:27.273 11:20:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:27.273 11:20:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:27.273 11:20:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:27.273 11:20:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:27.273 11:20:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.273 11:20:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.273 11:20:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.273 11:20:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:27.273 11:20:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.273 11:20:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.273 11:20:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.273 11:20:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.273 11:20:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.273 11:20:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.273 11:20:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.273 11:20:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.273 11:20:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:27.532 11:20:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:27.532 Cannot find device "nvmf_tgt_br" 00:16:27.532 11:20:08 -- nvmf/common.sh@154 -- # true 00:16:27.532 11:20:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.532 Cannot find device "nvmf_tgt_br2" 00:16:27.532 11:20:08 -- nvmf/common.sh@155 -- # true 00:16:27.532 11:20:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:27.532 11:20:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:27.532 Cannot find device "nvmf_tgt_br" 00:16:27.532 11:20:08 -- nvmf/common.sh@157 -- # true 00:16:27.532 11:20:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:27.532 Cannot find device "nvmf_tgt_br2" 00:16:27.532 11:20:08 -- nvmf/common.sh@158 -- # true 00:16:27.532 11:20:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:27.532 11:20:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:27.532 11:20:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.532 11:20:09 -- nvmf/common.sh@161 -- # true 00:16:27.532 11:20:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.532 11:20:09 -- nvmf/common.sh@162 -- # true 00:16:27.532 
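The "Cannot find device" and "Cannot open network namespace" messages above are expected on a fresh runner: nvmf_veth_init begins by tearing down whatever a previous run may have left behind, and there is nothing to delete yet. Each failing command is immediately followed by a bare "true" at the same script line, consistent with a best-effort guard along these lines (the "|| true" form is inferred from the trace, not quoted from common.sh):

    # Best-effort cleanup of leftovers from an earlier run; failures are ignored
    # so that an errexit shell does not abort the test before setup even starts.
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true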
11:20:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.532 11:20:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.532 11:20:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.532 11:20:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.532 11:20:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.532 11:20:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.532 11:20:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.532 11:20:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.532 11:20:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.532 11:20:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:27.532 11:20:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:27.532 11:20:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:27.532 11:20:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:27.532 11:20:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.532 11:20:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.791 11:20:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.791 11:20:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:27.791 11:20:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:27.791 11:20:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.791 11:20:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.791 11:20:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.791 11:20:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.791 11:20:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.791 11:20:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:27.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:27.791 00:16:27.791 --- 10.0.0.2 ping statistics --- 00:16:27.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.791 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:27.791 11:20:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:27.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:27.791 00:16:27.791 --- 10.0.0.3 ping statistics --- 00:16:27.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.791 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:27.791 11:20:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:27.791 00:16:27.791 --- 10.0.0.1 ping statistics --- 00:16:27.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.791 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:27.791 11:20:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.791 11:20:09 -- nvmf/common.sh@421 -- # return 0 00:16:27.791 11:20:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.791 11:20:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.791 11:20:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.791 11:20:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.791 11:20:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.791 11:20:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.791 11:20:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.791 11:20:09 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:27.791 11:20:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.791 11:20:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:27.791 11:20:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.791 11:20:09 -- nvmf/common.sh@469 -- # nvmfpid=71973 00:16:27.791 11:20:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:27.791 11:20:09 -- nvmf/common.sh@470 -- # waitforlisten 71973 00:16:27.791 11:20:09 -- common/autotest_common.sh@819 -- # '[' -z 71973 ']' 00:16:27.791 11:20:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.791 11:20:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.791 11:20:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.791 11:20:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.791 11:20:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.791 [2024-10-13 11:20:09.291563] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:27.791 [2024-10-13 11:20:09.291681] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.051 [2024-10-13 11:20:09.432052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.051 [2024-10-13 11:20:09.498587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.051 [2024-10-13 11:20:09.498787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.051 [2024-10-13 11:20:09.498805] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.051 [2024-10-13 11:20:09.498826] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
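Taken together, the setup commands above give the run its two-namespace topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target side gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside nvmf_tgt_ns_spdk, the *_br peer ends are enslaved to the nvmf_br bridge, and the three pings confirm reachability before the target is started. Reduced to a single path, the same wiring looks like the sketch below (root required; a condensed reconstruction of nvmf_veth_init, not a replacement for it):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target    <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as checked above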
00:16:28.051 [2024-10-13 11:20:09.499156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.051 [2024-10-13 11:20:09.499182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.987 11:20:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:28.987 11:20:10 -- common/autotest_common.sh@852 -- # return 0 00:16:28.987 11:20:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.987 11:20:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:28.987 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:28.987 11:20:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.987 11:20:10 -- host/multipath.sh@33 -- # nvmfapp_pid=71973 00:16:28.987 11:20:10 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.246 [2024-10-13 11:20:10.605665] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.246 11:20:10 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:29.246 Malloc0 00:16:29.505 11:20:10 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:29.764 11:20:11 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.764 11:20:11 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.023 [2024-10-13 11:20:11.610865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.282 11:20:11 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:30.282 [2024-10-13 11:20:11.826905] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:30.282 11:20:11 -- host/multipath.sh@44 -- # bdevperf_pid=72029 00:16:30.282 11:20:11 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:30.282 11:20:11 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.282 11:20:11 -- host/multipath.sh@47 -- # waitforlisten 72029 /var/tmp/bdevperf.sock 00:16:30.282 11:20:11 -- common/autotest_common.sh@819 -- # '[' -z 72029 ']' 00:16:30.282 11:20:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.282 11:20:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.282 11:20:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
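Everything bdevperf will exercise is created by the short RPC sequence traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with ANA reporting enabled, that bdev as its namespace, and listeners on ports 4420 and 4421 of the same address, which become the two paths. Condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2    # -r turns on ANA reporting
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # path 1
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # path 2

Each iteration that follows then flips the ANA state of the two listeners with set_ANA_state, lets the bpftrace script nvmf_path.bt (attached to the target pid) count I/O per path for six seconds, and cross-checks the port the target reports for the requested state against the port the probes actually saw traffic on. The target-side half of that check, extracted from the calls visible below:

    # Example from the first iteration: 4420 non-optimized, 4421 optimized ...
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # ... then ask which listener now reports the optimized state (expected: 4421).
    $rpc nvmf_subsystem_get_listeners $nqn \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'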
00:16:30.282 11:20:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.282 11:20:11 -- common/autotest_common.sh@10 -- # set +x 00:16:31.219 11:20:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.219 11:20:12 -- common/autotest_common.sh@852 -- # return 0 00:16:31.219 11:20:12 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:31.478 11:20:13 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:32.044 Nvme0n1 00:16:32.044 11:20:13 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:32.303 Nvme0n1 00:16:32.303 11:20:13 -- host/multipath.sh@78 -- # sleep 1 00:16:32.303 11:20:13 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.240 11:20:14 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:33.240 11:20:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:33.498 11:20:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:33.757 11:20:15 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:33.757 11:20:15 -- host/multipath.sh@65 -- # dtrace_pid=72075 00:16:33.757 11:20:15 -- host/multipath.sh@66 -- # sleep 6 00:16:33.757 11:20:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:40.347 11:20:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:40.347 11:20:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:40.347 11:20:21 -- host/multipath.sh@67 -- # active_port=4421 00:16:40.347 11:20:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.347 Attaching 4 probes... 
00:16:40.347 @path[10.0.0.2, 4421]: 19253 00:16:40.347 @path[10.0.0.2, 4421]: 19704 00:16:40.347 @path[10.0.0.2, 4421]: 19797 00:16:40.347 @path[10.0.0.2, 4421]: 19845 00:16:40.347 @path[10.0.0.2, 4421]: 20264 00:16:40.347 11:20:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:40.347 11:20:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:40.347 11:20:21 -- host/multipath.sh@69 -- # sed -n 1p 00:16:40.347 11:20:21 -- host/multipath.sh@69 -- # port=4421 00:16:40.347 11:20:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.347 11:20:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.347 11:20:21 -- host/multipath.sh@72 -- # kill 72075 00:16:40.347 11:20:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.347 11:20:21 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:40.347 11:20:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.347 11:20:21 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:40.606 11:20:22 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:40.606 11:20:22 -- host/multipath.sh@65 -- # dtrace_pid=72194 00:16:40.606 11:20:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:40.606 11:20:22 -- host/multipath.sh@66 -- # sleep 6 00:16:47.199 11:20:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:47.199 11:20:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:47.199 11:20:28 -- host/multipath.sh@67 -- # active_port=4420 00:16:47.199 11:20:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.199 Attaching 4 probes... 
00:16:47.199 @path[10.0.0.2, 4420]: 19823 00:16:47.199 @path[10.0.0.2, 4420]: 20020 00:16:47.199 @path[10.0.0.2, 4420]: 19852 00:16:47.199 @path[10.0.0.2, 4420]: 20034 00:16:47.199 @path[10.0.0.2, 4420]: 20173 00:16:47.199 11:20:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:47.200 11:20:28 -- host/multipath.sh@69 -- # sed -n 1p 00:16:47.200 11:20:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:47.200 11:20:28 -- host/multipath.sh@69 -- # port=4420 00:16:47.200 11:20:28 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:47.200 11:20:28 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:47.200 11:20:28 -- host/multipath.sh@72 -- # kill 72194 00:16:47.200 11:20:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.200 11:20:28 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:47.200 11:20:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:47.200 11:20:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:47.459 11:20:29 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:47.459 11:20:29 -- host/multipath.sh@65 -- # dtrace_pid=72314 00:16:47.459 11:20:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:47.459 11:20:29 -- host/multipath.sh@66 -- # sleep 6 00:16:54.025 11:20:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:54.025 11:20:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:54.025 11:20:35 -- host/multipath.sh@67 -- # active_port=4421 00:16:54.025 11:20:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.025 Attaching 4 probes... 
00:16:54.025 @path[10.0.0.2, 4421]: 15411 00:16:54.025 @path[10.0.0.2, 4421]: 19847 00:16:54.025 @path[10.0.0.2, 4421]: 19620 00:16:54.025 @path[10.0.0.2, 4421]: 19384 00:16:54.025 @path[10.0.0.2, 4421]: 19467 00:16:54.025 11:20:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:54.025 11:20:35 -- host/multipath.sh@69 -- # sed -n 1p 00:16:54.025 11:20:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:54.025 11:20:35 -- host/multipath.sh@69 -- # port=4421 00:16:54.025 11:20:35 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:54.025 11:20:35 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:54.025 11:20:35 -- host/multipath.sh@72 -- # kill 72314 00:16:54.025 11:20:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.025 11:20:35 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:54.025 11:20:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:54.025 11:20:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:54.284 11:20:35 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:54.284 11:20:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:54.284 11:20:35 -- host/multipath.sh@65 -- # dtrace_pid=72425 00:16:54.284 11:20:35 -- host/multipath.sh@66 -- # sleep 6 00:17:00.855 11:20:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:00.855 11:20:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:00.855 11:20:42 -- host/multipath.sh@67 -- # active_port= 00:17:00.855 11:20:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.855 Attaching 4 probes... 
00:17:00.855 00:17:00.855 00:17:00.855 00:17:00.855 00:17:00.855 00:17:00.855 11:20:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:00.855 11:20:42 -- host/multipath.sh@69 -- # sed -n 1p 00:17:00.855 11:20:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:00.855 11:20:42 -- host/multipath.sh@69 -- # port= 00:17:00.855 11:20:42 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:00.855 11:20:42 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:00.855 11:20:42 -- host/multipath.sh@72 -- # kill 72425 00:17:00.855 11:20:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.855 11:20:42 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:00.855 11:20:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:00.855 11:20:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:01.114 11:20:42 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:01.114 11:20:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.114 11:20:42 -- host/multipath.sh@65 -- # dtrace_pid=72540 00:17:01.114 11:20:42 -- host/multipath.sh@66 -- # sleep 6 00:17:07.682 11:20:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:07.682 11:20:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:07.682 11:20:48 -- host/multipath.sh@67 -- # active_port=4421 00:17:07.682 11:20:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.682 Attaching 4 probes... 
00:17:07.682 @path[10.0.0.2, 4421]: 18855 00:17:07.682 @path[10.0.0.2, 4421]: 19328 00:17:07.682 @path[10.0.0.2, 4421]: 19335 00:17:07.682 @path[10.0.0.2, 4421]: 19446 00:17:07.682 @path[10.0.0.2, 4421]: 19247 00:17:07.682 11:20:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:07.682 11:20:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:07.682 11:20:48 -- host/multipath.sh@69 -- # sed -n 1p 00:17:07.682 11:20:48 -- host/multipath.sh@69 -- # port=4421 00:17:07.682 11:20:48 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.682 11:20:48 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.682 11:20:48 -- host/multipath.sh@72 -- # kill 72540 00:17:07.682 11:20:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.682 11:20:48 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:07.682 [2024-10-13 11:20:49.097461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 [2024-10-13 11:20:49.097630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 (same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* message repeated for each timestamp from 2024-10-13 11:20:49.097637 through 11:20:49.097791) 00:17:07.682 [2024-10-13 11:20:49.097798]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d230 is same with the state(5) to be set 00:17:07.682 11:20:49 -- host/multipath.sh@101 -- # sleep 1 00:17:08.618 11:20:50 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:08.618 11:20:50 -- host/multipath.sh@65 -- # dtrace_pid=72663 00:17:08.618 11:20:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.618 11:20:50 -- host/multipath.sh@66 -- # sleep 6 00:17:15.188 11:20:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.188 11:20:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:15.188 11:20:56 -- host/multipath.sh@67 -- # active_port=4420 00:17:15.188 11:20:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.188 Attaching 4 probes... 00:17:15.188 @path[10.0.0.2, 4420]: 19803 00:17:15.188 @path[10.0.0.2, 4420]: 19568 00:17:15.188 @path[10.0.0.2, 4420]: 19237 00:17:15.188 @path[10.0.0.2, 4420]: 19449 00:17:15.188 @path[10.0.0.2, 4420]: 19205 00:17:15.188 11:20:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.188 11:20:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:15.188 11:20:56 -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.188 11:20:56 -- host/multipath.sh@69 -- # port=4420 00:17:15.188 11:20:56 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.188 11:20:56 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.188 11:20:56 -- host/multipath.sh@72 -- # kill 72663 00:17:15.188 11:20:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.188 11:20:56 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:15.188 [2024-10-13 11:20:56.657934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:15.188 11:20:56 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:15.447 11:20:56 -- host/multipath.sh@111 -- # sleep 6 00:17:22.013 11:21:02 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:22.013 11:21:02 -- host/multipath.sh@65 -- # dtrace_pid=72843 00:17:22.013 11:21:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 71973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:22.013 11:21:02 -- host/multipath.sh@66 -- # sleep 6 00:17:28.595 11:21:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.595 11:21:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.595 11:21:09 -- host/multipath.sh@67 -- # active_port=4421 00:17:28.595 11:21:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.595 Attaching 4 probes... 
00:17:28.595 @path[10.0.0.2, 4421]: 18658 00:17:28.595 @path[10.0.0.2, 4421]: 19050 00:17:28.595 @path[10.0.0.2, 4421]: 19258 00:17:28.595 @path[10.0.0.2, 4421]: 18871 00:17:28.595 @path[10.0.0.2, 4421]: 18945 00:17:28.595 11:21:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.595 11:21:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:28.595 11:21:09 -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.595 11:21:09 -- host/multipath.sh@69 -- # port=4421 00:17:28.595 11:21:09 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.595 11:21:09 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.595 11:21:09 -- host/multipath.sh@72 -- # kill 72843 00:17:28.595 11:21:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.595 11:21:09 -- host/multipath.sh@114 -- # killprocess 72029 00:17:28.595 11:21:09 -- common/autotest_common.sh@926 -- # '[' -z 72029 ']' 00:17:28.595 11:21:09 -- common/autotest_common.sh@930 -- # kill -0 72029 00:17:28.595 11:21:09 -- common/autotest_common.sh@931 -- # uname 00:17:28.595 11:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.595 11:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72029 00:17:28.595 killing process with pid 72029 00:17:28.595 11:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:28.595 11:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:28.595 11:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72029' 00:17:28.595 11:21:09 -- common/autotest_common.sh@945 -- # kill 72029 00:17:28.595 11:21:09 -- common/autotest_common.sh@950 -- # wait 72029 00:17:28.595 Connection closed with partial response: 00:17:28.595 00:17:28.595 00:17:28.595 11:21:09 -- host/multipath.sh@116 -- # wait 72029 00:17:28.595 11:21:09 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.595 [2024-10-13 11:20:11.889200] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:28.595 [2024-10-13 11:20:11.889309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72029 ] 00:17:28.595 [2024-10-13 11:20:12.023594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.595 [2024-10-13 11:20:12.110687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.595 Running I/O for 90 seconds... 
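For readability, the following is a minimal sketch of the trace-parsing step that host/multipath.sh line 69 performs in the confirm_io_on_port runs above. The awk, sed, and cut commands are the ones visible in the log; the helper function name and its argument are illustrative only and are not part of the test suite.

# Sketch: extract the port of the first "@path[10.0.0.2, <port>]: <count>" sample
# that scripts/bpf/nvmf_path.bt wrote into trace.txt, as done at multipath.sh@69.
# get_first_port is a hypothetical name; the pipeline itself mirrors the log output.
get_first_port() {
        local trace_file=$1    # e.g. /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
        awk '$1=="@path[10.0.0.2," {print $2}' "$trace_file" \
                | sed -n 1p \
                | cut -d ']' -f1
}
# Example use, matching the comparison at multipath.sh@70-71:
# port=$(get_first_port trace.txt); [[ $port == 4421 ]]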
00:17:28.595 [2024-10-13 11:20:22.183253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.595 [2024-10-13 11:20:22.183354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.595 [2024-10-13 11:20:22.183413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.183435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.183503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.183602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.183745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.183845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.183977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.183996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109232 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.596 [2024-10-13 11:20:22.184533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.596 [2024-10-13 11:20:22.184888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.596 [2024-10-13 11:20:22.184902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.184922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.184936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.184956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.184970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.184990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:17:28.597 [2024-10-13 11:20:22.185166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.185676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.185967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.185987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.186007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.186042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.186076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.186110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.186143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.597 [2024-10-13 11:20:22.186177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.597 [2024-10-13 11:20:22.186211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.597 [2024-10-13 11:20:22.186231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186626] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.186904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.186973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.186990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 
11:20:22.187078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.187107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.187364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.187381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.188958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.188989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.189047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.189082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.189116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.189150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.189184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.598 [2024-10-13 11:20:22.189218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.189253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.189287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.598 [2024-10-13 11:20:22.189321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.598 [2024-10-13 11:20:22.189372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:22.189390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:22.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:22.189674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:22.189758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:22.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:28.599 [2024-10-13 11:20:22.189792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.736849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.736883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.736915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.736963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.736985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.599 [2024-10-13 11:20:28.737801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.599 [2024-10-13 11:20:28.737821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.599 [2024-10-13 11:20:28.737836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:17:28.600 [2024-10-13 11:20:28.737866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.737882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.737901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.737915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.737935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.737948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.737967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.737981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.600 [2024-10-13 11:20:28.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.738927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.600 [2024-10-13 11:20:28.738962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.738983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.739013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.600 [2024-10-13 11:20:28.739048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.600 [2024-10-13 11:20:28.739062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-10-13 11:20:28.739143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-10-13 11:20:28.739596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-10-13 11:20:28.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.739968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.739987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:17:28.601 [2024-10-13 11:20:28.740020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-10-13 11:20:28.740308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.601 [2024-10-13 11:20:28.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.601 [2024-10-13 11:20:28.740479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.601 [2024-10-13 11:20:28.740492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.740569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.740708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.740743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.740806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.740839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.740971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.740991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.741009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.741030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.741063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.741077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.602 [2024-10-13 11:20:28.742362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.742474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.742508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.742610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.742739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.742757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.743267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.743426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.602 [2024-10-13 11:20:28.743560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.602 [2024-10-13 11:20:28.743594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.602 [2024-10-13 11:20:28.743614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.743630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:17:28.603 [2024-10-13 11:20:28.743932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.743979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.603 [2024-10-13 11:20:28.744932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.744986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.744999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.745019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.745032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.745059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.603 [2024-10-13 11:20:28.745074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.745094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.603 [2024-10-13 11:20:28.745107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.603 [2024-10-13 11:20:28.745127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.745680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.745699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.755905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.755959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.755980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.756269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.604 [2024-10-13 11:20:28.756387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:17:28.604 [2024-10-13 11:20:28.756409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.604 [2024-10-13 11:20:28.756554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.604 [2024-10-13 11:20:28.756568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.756973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.756987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.757927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.757981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.757995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.758016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.758030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.758056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.605 [2024-10-13 11:20:28.758086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.758106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.605 [2024-10-13 11:20:28.758121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.605 [2024-10-13 11:20:28.758141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.758155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.758174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.758188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.758208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.758222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.758242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.758256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:17:28.606 [2024-10-13 11:20:28.760864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.760941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.760970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.761536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.761714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.761881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.761929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.761958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.761977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.762039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.606 [2024-10-13 11:20:28.762088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.762185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.606 [2024-10-13 11:20:28.762281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.606 [2024-10-13 11:20:28.762311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.607 [2024-10-13 11:20:28.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.762666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.762741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.762790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.762965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.762985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.763948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.763975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:17:28.607 [2024-10-13 11:20:28.764055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.607 [2024-10-13 11:20:28.764401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.607 [2024-10-13 11:20:28.764457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.607 [2024-10-13 11:20:28.764486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.764967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.764986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.765574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:28.608 [2024-10-13 11:20:28.765622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.765866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.765955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.765974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.766022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.766071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.766119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.766167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.608 [2024-10-13 11:20:28.766216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.766265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.608 [2024-10-13 11:20:28.766313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.608 [2024-10-13 11:20:28.766377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.766711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.766772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.766869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.766917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.766994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.767014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.767051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.767071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:17:28.609 [2024-10-13 11:20:28.769235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.769877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.769966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.770480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.609 [2024-10-13 11:20:28.770578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.609 [2024-10-13 11:20:28.770644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.609 [2024-10-13 11:20:28.770674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.770693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.770772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.610 [2024-10-13 11:20:28.770870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.770918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.770966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.770995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.771923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.771976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.771990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.772023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:17:28.610 [2024-10-13 11:20:28.772043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.772057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.772091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.772124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.772165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.610 [2024-10-13 11:20:28.772218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.772253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.772287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.610 [2024-10-13 11:20:28.772307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.610 [2024-10-13 11:20:28.772320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.611 [2024-10-13 11:20:28.772766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.611 [2024-10-13 11:20:28.772833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.772971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.772984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:28.611 [2024-10-13 11:20:28.773186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.611 [2024-10-13 11:20:28.773551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.611 [2024-10-13 11:20:28.773594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.611 [2024-10-13 11:20:28.773814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.611 [2024-10-13 11:20:28.773865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.611 [2024-10-13 11:20:28.773879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.773898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.773911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.773931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.773944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.773963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.773977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.773999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:17:28.612 [2024-10-13 11:20:28.774305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.774561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.774581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.612 [2024-10-13 11:20:28.774595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:28.775005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:28.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.612 [2024-10-13 11:20:35.824397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.612 [2024-10-13 11:20:35.824464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, 2024-10-13 11:20:35.824-11:20:35.830 (qid:1, lba 126008-127344): each remaining READ/WRITE command in this burst completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:17:28.615 [2024-10-13 11:20:49.097861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:28.615 [2024-10-13 11:20:49.097912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, 2024-10-13 11:20:49.097-11:20:49.099 (qid:1, lba 112936-113928): each remaining READ/WRITE command in this burst completed with ABORTED - SQ DELETION (00/08) ...]
00:17:28.617 [2024-10-13 11:20:49.099747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:28.617 [2024-10-13 11:20:49.099759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.617 [2024-10-13 11:20:49.099785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.617 [2024-10-13 11:20:49.099811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.099837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.617 [2024-10-13 11:20:49.099863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.099895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.099921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.099947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.617 [2024-10-13 11:20:49.099973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.099986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.099999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.617 [2024-10-13 11:20:49.100232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.617 [2024-10-13 11:20:49.100260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.617 [2024-10-13 11:20:49.100274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 
11:20:49.100360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.100646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.100980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.100994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.101117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.101172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.101198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.618 [2024-10-13 11:20:49.101305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.618 [2024-10-13 11:20:49.101499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.618 [2024-10-13 11:20:49.101512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.619 [2024-10-13 11:20:49.101759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebdc50 is same with the state(5) to be set 00:17:28.619 [2024-10-13 11:20:49.101791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.619 [2024-10-13 11:20:49.101802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.619 [2024-10-13 11:20:49.101813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113664 len:8 PRP1 0x0 PRP2 0x0 00:17:28.619 [2024-10-13 11:20:49.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.101881] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xebdc50 was disconnected and freed. reset controller. 
00:17:28.619 [2024-10-13 11:20:49.101992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.619 [2024-10-13 11:20:49.102021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.102037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.619 [2024-10-13 11:20:49.102051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.102065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.619 [2024-10-13 11:20:49.102079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.102093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.619 [2024-10-13 11:20:49.102107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.619 [2024-10-13 11:20:49.102120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9ab20 is same with the state(5) to be set 00:17:28.619 [2024-10-13 11:20:49.103358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:28.619 [2024-10-13 11:20:49.103397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9ab20 (9): Bad file descriptor 00:17:28.619 [2024-10-13 11:20:49.103719] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.619 [2024-10-13 11:20:49.103795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.619 [2024-10-13 11:20:49.103846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.619 [2024-10-13 11:20:49.103869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9ab20 with addr=10.0.0.2, port=4421 00:17:28.619 [2024-10-13 11:20:49.103885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9ab20 is same with the state(5) to be set 00:17:28.619 [2024-10-13 11:20:49.103919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9ab20 (9): Bad file descriptor 00:17:28.619 [2024-10-13 11:20:49.103951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:28.619 [2024-10-13 11:20:49.103970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:28.619 [2024-10-13 11:20:49.103983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:28.619 [2024-10-13 11:20:49.104015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:28.619 [2024-10-13 11:20:49.104037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:28.619 [2024-10-13 11:20:59.153849] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
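The block above is the initiator-side view of a path switch: every command still queued on the old qpair is completed with ABORTED - SQ DELETION, bdev_nvme frees the qpair and resets the controller, the first reconnect to 10.0.0.2:4421 is refused (connect() errno 111), and the retry loop succeeds about ten seconds later. A minimal sketch of how such a failover can be driven by hand with the same RPCs this log uses elsewhere (NQN, address and rpc.py path taken from this run; the 4421 listener is assumed to have been added earlier in the test, outside this excerpt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # make sure an alternate path exists before breaking the active one
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    # removing the active listener tears down its qpairs; queued I/O is
    # aborted with SQ DELETION and bdev_nvme fails over to port 4421
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420

How long the initiator keeps retrying before giving up is controlled by the --ctrlr-loss-timeout-sec and --reconnect-delay-sec options of bdev_nvme_attach_controller, which the timeout test later in this log sets explicitly.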
00:17:28.619 Received shutdown signal, test time was about 55.507850 seconds 00:17:28.619 00:17:28.619 Latency(us) 00:17:28.619 [2024-10-13T11:21:10.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.619 [2024-10-13T11:21:10.221Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.619 Verification LBA range: start 0x0 length 0x4000 00:17:28.619 Nvme0n1 : 55.51 11097.31 43.35 0.00 0.00 11515.47 131.26 7015926.69 00:17:28.619 [2024-10-13T11:21:10.221Z] =================================================================================================================== 00:17:28.619 [2024-10-13T11:21:10.221Z] Total : 11097.31 43.35 0.00 0.00 11515.47 131.26 7015926.69 00:17:28.619 11:21:09 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.619 11:21:09 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:28.619 11:21:09 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.619 11:21:09 -- host/multipath.sh@125 -- # nvmftestfini 00:17:28.619 11:21:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:28.619 11:21:09 -- nvmf/common.sh@116 -- # sync 00:17:28.619 11:21:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:28.619 11:21:09 -- nvmf/common.sh@119 -- # set +e 00:17:28.619 11:21:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:28.619 11:21:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:28.619 rmmod nvme_tcp 00:17:28.619 rmmod nvme_fabrics 00:17:28.619 rmmod nvme_keyring 00:17:28.619 11:21:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:28.619 11:21:09 -- nvmf/common.sh@123 -- # set -e 00:17:28.619 11:21:09 -- nvmf/common.sh@124 -- # return 0 00:17:28.619 11:21:09 -- nvmf/common.sh@477 -- # '[' -n 71973 ']' 00:17:28.619 11:21:09 -- nvmf/common.sh@478 -- # killprocess 71973 00:17:28.619 11:21:09 -- common/autotest_common.sh@926 -- # '[' -z 71973 ']' 00:17:28.619 11:21:09 -- common/autotest_common.sh@930 -- # kill -0 71973 00:17:28.619 11:21:09 -- common/autotest_common.sh@931 -- # uname 00:17:28.619 11:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.619 11:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71973 00:17:28.619 11:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.619 11:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.619 killing process with pid 71973 00:17:28.619 11:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71973' 00:17:28.619 11:21:09 -- common/autotest_common.sh@945 -- # kill 71973 00:17:28.619 11:21:09 -- common/autotest_common.sh@950 -- # wait 71973 00:17:28.619 11:21:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:28.619 11:21:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:28.619 11:21:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:28.619 11:21:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.619 11:21:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:28.619 11:21:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.619 11:21:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.619 11:21:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.619 11:21:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:28.619 00:17:28.619 real 1m1.360s 00:17:28.619 user 2m49.791s 00:17:28.619 
sys 0m18.667s 00:17:28.619 11:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.619 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:28.619 ************************************ 00:17:28.619 END TEST nvmf_multipath 00:17:28.619 ************************************ 00:17:28.619 11:21:10 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.619 11:21:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:28.619 11:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.619 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:28.619 ************************************ 00:17:28.619 START TEST nvmf_timeout 00:17:28.619 ************************************ 00:17:28.619 11:21:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.879 * Looking for test storage... 00:17:28.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.879 11:21:10 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.879 11:21:10 -- nvmf/common.sh@7 -- # uname -s 00:17:28.879 11:21:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.879 11:21:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.879 11:21:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.879 11:21:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.879 11:21:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.879 11:21:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.879 11:21:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.879 11:21:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.879 11:21:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.879 11:21:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.879 11:21:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:17:28.879 11:21:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:17:28.879 11:21:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.879 11:21:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.879 11:21:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.879 11:21:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.879 11:21:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.879 11:21:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.879 11:21:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.879 11:21:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.879 11:21:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.880 11:21:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.880 11:21:10 -- paths/export.sh@5 -- # export PATH 00:17:28.880 11:21:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.880 11:21:10 -- nvmf/common.sh@46 -- # : 0 00:17:28.880 11:21:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.880 11:21:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.880 11:21:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.880 11:21:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.880 11:21:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.880 11:21:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:28.880 11:21:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.880 11:21:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.880 11:21:10 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.880 11:21:10 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.880 11:21:10 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.880 11:21:10 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:28.880 11:21:10 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.880 11:21:10 -- host/timeout.sh@19 -- # nvmftestinit 00:17:28.880 11:21:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:28.880 11:21:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.880 11:21:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:28.880 11:21:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:28.880 11:21:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:28.880 11:21:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.880 11:21:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.880 11:21:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.880 11:21:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
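nvmftestinit with NET_TYPE=virt falls through to nvmf_veth_init, and the lines that follow are its trace: two veth pairs and a bridge connect the initiator namespace to a dedicated target namespace. A condensed sketch of that topology, using the interface and namespace names from the trace (ordering simplified; the second target interface nvmf_tgt_if2/10.0.0.3 and the individual link-up steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # reachability check, producing the statistics below

The ping statistics blocks for 10.0.0.2, 10.0.0.3 and 10.0.0.1 further down are exactly these reachability checks.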
00:17:28.880 11:21:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:28.880 11:21:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:28.880 11:21:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:28.880 11:21:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:28.880 11:21:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:28.880 11:21:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.880 11:21:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.880 11:21:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.880 11:21:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:28.880 11:21:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.880 11:21:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.880 11:21:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.880 11:21:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.880 11:21:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.880 11:21:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.880 11:21:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.880 11:21:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.880 11:21:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:28.880 11:21:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:28.880 Cannot find device "nvmf_tgt_br" 00:17:28.880 11:21:10 -- nvmf/common.sh@154 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.880 Cannot find device "nvmf_tgt_br2" 00:17:28.880 11:21:10 -- nvmf/common.sh@155 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:28.880 11:21:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:28.880 Cannot find device "nvmf_tgt_br" 00:17:28.880 11:21:10 -- nvmf/common.sh@157 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:28.880 Cannot find device "nvmf_tgt_br2" 00:17:28.880 11:21:10 -- nvmf/common.sh@158 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:28.880 11:21:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:28.880 11:21:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.880 11:21:10 -- nvmf/common.sh@161 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.880 11:21:10 -- nvmf/common.sh@162 -- # true 00:17:28.880 11:21:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.880 11:21:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.880 11:21:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.880 11:21:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.880 11:21:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.880 11:21:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.139 11:21:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:17:29.139 11:21:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:29.139 11:21:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:29.139 11:21:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:29.139 11:21:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:29.139 11:21:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:29.139 11:21:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:29.139 11:21:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.139 11:21:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.139 11:21:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.139 11:21:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:29.139 11:21:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:29.139 11:21:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.139 11:21:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.139 11:21:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.139 11:21:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.139 11:21:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.139 11:21:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:29.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:29.139 00:17:29.139 --- 10.0.0.2 ping statistics --- 00:17:29.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.139 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:29.139 11:21:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:29.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:29.139 00:17:29.139 --- 10.0.0.3 ping statistics --- 00:17:29.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.139 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:29.139 11:21:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:29.139 00:17:29.139 --- 10.0.0.1 ping statistics --- 00:17:29.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.139 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:29.139 11:21:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.139 11:21:10 -- nvmf/common.sh@421 -- # return 0 00:17:29.139 11:21:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:29.139 11:21:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.139 11:21:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:29.139 11:21:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:29.139 11:21:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.139 11:21:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:29.139 11:21:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:29.139 11:21:10 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:29.139 11:21:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.139 11:21:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.139 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:29.139 11:21:10 -- nvmf/common.sh@469 -- # nvmfpid=73142 00:17:29.139 11:21:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:29.139 11:21:10 -- nvmf/common.sh@470 -- # waitforlisten 73142 00:17:29.139 11:21:10 -- common/autotest_common.sh@819 -- # '[' -z 73142 ']' 00:17:29.139 11:21:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.139 11:21:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.139 11:21:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.139 11:21:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.139 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:29.139 [2024-10-13 11:21:10.687975] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:29.139 [2024-10-13 11:21:10.688064] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.398 [2024-10-13 11:21:10.822543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.398 [2024-10-13 11:21:10.874353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.398 [2024-10-13 11:21:10.874497] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.398 [2024-10-13 11:21:10.874508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.398 [2024-10-13 11:21:10.874516] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
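nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough equivalent of what the two helpers do (the polling loop is a sketch, not the real waitforlisten implementation; rpc_get_methods is only used here as a cheap probe):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the application's RPC socket until it is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

With -m 0x3 the target runs reactors on cores 0 and 1, which is what the reactor_run notices on the next lines report.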
00:17:29.398 [2024-10-13 11:21:10.874679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.398 [2024-10-13 11:21:10.874686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.332 11:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.332 11:21:11 -- common/autotest_common.sh@852 -- # return 0 00:17:30.332 11:21:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.332 11:21:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.332 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:17:30.333 11:21:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.333 11:21:11 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.333 11:21:11 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.591 [2024-10-13 11:21:12.024929] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.591 11:21:12 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:30.851 Malloc0 00:17:30.851 11:21:12 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.111 11:21:12 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.398 11:21:12 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.657 [2024-10-13 11:21:13.120751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.657 11:21:13 -- host/timeout.sh@32 -- # bdevperf_pid=73202 00:17:31.657 11:21:13 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:31.657 11:21:13 -- host/timeout.sh@34 -- # waitforlisten 73202 /var/tmp/bdevperf.sock 00:17:31.657 11:21:13 -- common/autotest_common.sh@819 -- # '[' -z 73202 ']' 00:17:31.657 11:21:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.657 11:21:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:31.657 11:21:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.657 11:21:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:31.657 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:17:31.657 [2024-10-13 11:21:13.198579] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
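With the target up, timeout.sh provisions it over the RPC socket and starts bdevperf on its own socket. The commands below all appear in the trace above and are only collected here for readability; the option values are the ones this run used, not defaults:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf: queue depth 128, 4096-byte I/O, verify workload, 10 s, own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

The initiator side is wired up on the following lines with bdev_nvme_attach_controller, whose --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 options are the knobs the timeout test actually exercises.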
00:17:31.657 [2024-10-13 11:21:13.198707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73202 ] 00:17:31.915 [2024-10-13 11:21:13.338946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.915 [2024-10-13 11:21:13.408449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.852 11:21:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.852 11:21:14 -- common/autotest_common.sh@852 -- # return 0 00:17:32.852 11:21:14 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:32.853 11:21:14 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:33.111 NVMe0n1 00:17:33.111 11:21:14 -- host/timeout.sh@51 -- # rpc_pid=73220 00:17:33.111 11:21:14 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:33.111 11:21:14 -- host/timeout.sh@53 -- # sleep 1 00:17:33.370 Running I/O for 10 seconds... 00:17:34.308 11:21:15 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.308 [2024-10-13 11:21:15.898478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 
11:21:15.898637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.898645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881480 is same with the state(5) to be set 00:17:34.308 [2024-10-13 11:21:15.899640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.900148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.900314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.900518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.900678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.900828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.900928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.901890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.901986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.902165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.902316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.902523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-10-13 11:21:15.902694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-10-13 11:21:15.902889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.902959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-10-13 11:21:15.903043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.903133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.903223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.903307] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-10-13 11:21:15.903432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.903523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.903619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.903703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.903793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.903876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.903969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.904054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.308 [2024-10-13 11:21:15.904144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-10-13 11:21:15.904227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 
nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130936 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.904793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 
[2024-10-13 11:21:15.904879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.904985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.904997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.905049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.905090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.905111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.905133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.309 [2024-10-13 11:21:15.905197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.309 [2024-10-13 11:21:15.905209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.309 [2024-10-13 11:21:15.905218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.310 [2024-10-13 11:21:15.905230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.310 [2024-10-13 11:21:15.905240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.310 [2024-10-13 11:21:15.905251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.310 [2024-10-13 11:21:15.905260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.310 [2024-10-13 11:21:15.905272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.310 [2024-10-13 11:21:15.905282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.310 [2024-10-13 11:21:15.905294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.310 [2024-10-13 11:21:15.905304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.905316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.908839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.908945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.909024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.909164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.909336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.909527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.909694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.909864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.909934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.910021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.910200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.910376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.910539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.910691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.910869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.910963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.911851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.911920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.912008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.912092] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.912180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.912263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.912377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.912443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.570 [2024-10-13 11:21:15.912514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.912582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.570 [2024-10-13 11:21:15.912672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.570 [2024-10-13 11:21:15.912746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.912822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.912905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.912993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.913900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.913984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.914224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.914561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.914922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:240 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.914964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.914985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.914997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.915027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.915048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.915069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.571 [2024-10-13 11:21:15.915111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 
11:21:15.915153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 11:21:15 -- host/timeout.sh@56 -- # sleep 2 00:17:34.571 [2024-10-13 11:21:15.915207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.571 [2024-10-13 11:21:15.915279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21790c0 is same with the state(5) to be set 00:17:34.571 [2024-10-13 11:21:15.915305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:34.571 [2024-10-13 11:21:15.915313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:34.571 [2024-10-13 11:21:15.915349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130800 len:8 PRP1 0x0 PRP2 0x0 00:17:34.571 [2024-10-13 11:21:15.915361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915408] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21790c0 was disconnected and freed. reset controller. 
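The long run of "ABORTED - SQ DELETION" completions above, ending with I/O qpair 0x21790c0 being disconnected and freed, is the fault this test injects on purpose: NVMe0 is attached with a 2-second reconnect delay and a 5-second controller-loss timeout, the 10-second verify job is started, and about one second in the listener is pulled, so every command still outstanding on that submission queue completes as aborted and the controller goes into a reset. The host-side steps, with flags copied from the logged commands ($rpc and $brpc are shorthand used only in this sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc -s /var/tmp/bdevperf.sock"

# Recovery knobs under test; -r -1 and both timeout values are taken verbatim from the log.
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the verify job in the background, then yank the listener out from under it after ~1 s.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420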
00:17:34.571 [2024-10-13 11:21:15.915518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.571 [2024-10-13 11:21:15.915543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.571 [2024-10-13 11:21:15.915564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.571 [2024-10-13 11:21:15.915584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.571 [2024-10-13 11:21:15.915603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.571 [2024-10-13 11:21:15.915612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116010 is same with the state(5) to be set 00:17:34.571 [2024-10-13 11:21:15.915837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.571 [2024-10-13 11:21:15.915858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116010 (9): Bad file descriptor 00:17:34.571 [2024-10-13 11:21:15.915955] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.571 [2024-10-13 11:21:15.916020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.571 [2024-10-13 11:21:15.916064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.571 [2024-10-13 11:21:15.916081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116010 with addr=10.0.0.2, port=4420 00:17:34.572 [2024-10-13 11:21:15.916092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116010 is same with the state(5) to be set 00:17:34.572 [2024-10-13 11:21:15.916112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116010 (9): Bad file descriptor 00:17:34.572 [2024-10-13 11:21:15.916129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:34.572 [2024-10-13 11:21:15.916138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:34.572 [2024-10-13 11:21:15.916149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:34.572 [2024-10-13 11:21:15.916169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
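What follows is the reconnect loop those two flags drive: each attempt fails with connect() errno = 111 on both the io_uring and posix socket providers because nothing is listening any more, controller re-initialization fails, and the next attempt is scheduled after the 2-second reconnect delay (attempts at 11:21:15, :17 and :19). Once more than 5 seconds have passed since the disconnect, the 11:21:21 attempt is the last; the controller stays failed and is torn down, which the empty get_controller/get_bdev checks further down confirm. A purely illustrative shell model of that decision, with variable names that are not from the test:

reconnect_delay_sec=2     # --reconnect-delay-sec on the attach command
ctrlr_loss_timeout_sec=5  # --ctrlr-loss-timeout-sec on the attach command
disconnected_at=$(date +%s)

while sleep "$reconnect_delay_sec"; do
    if (( $(date +%s) - disconnected_at > ctrlr_loss_timeout_sec )); then
        echo "controller-loss timeout exceeded: stop retrying, delete the controller"
        break
    fi
    echo "retrying NVMe/TCP connect to 10.0.0.2:4420"
done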
00:17:34.572 [2024-10-13 11:21:15.916179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.476 [2024-10-13 11:21:17.916298] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.476 [2024-10-13 11:21:17.916425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.476 [2024-10-13 11:21:17.916467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.476 [2024-10-13 11:21:17.916483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116010 with addr=10.0.0.2, port=4420 00:17:36.476 [2024-10-13 11:21:17.916496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116010 is same with the state(5) to be set 00:17:36.476 [2024-10-13 11:21:17.916520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116010 (9): Bad file descriptor 00:17:36.476 [2024-10-13 11:21:17.916549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:36.476 [2024-10-13 11:21:17.916560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:36.476 [2024-10-13 11:21:17.916570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:36.476 [2024-10-13 11:21:17.916595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.477 [2024-10-13 11:21:17.916606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.477 11:21:17 -- host/timeout.sh@57 -- # get_controller 00:17:36.477 11:21:17 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:36.477 11:21:17 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:36.737 11:21:18 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:36.737 11:21:18 -- host/timeout.sh@58 -- # get_bdev 00:17:36.737 11:21:18 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:36.737 11:21:18 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:36.996 11:21:18 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:36.996 11:21:18 -- host/timeout.sh@61 -- # sleep 5 00:17:38.374 [2024-10-13 11:21:19.916730] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.374 [2024-10-13 11:21:19.916844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.374 [2024-10-13 11:21:19.916886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.374 [2024-10-13 11:21:19.916901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116010 with addr=10.0.0.2, port=4420 00:17:38.374 [2024-10-13 11:21:19.916915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116010 is same with the state(5) to be set 00:17:38.374 [2024-10-13 11:21:19.916941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116010 (9): Bad file descriptor 00:17:38.374 [2024-10-13 11:21:19.916960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.374 [2024-10-13 11:21:19.916969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:38.374 [2024-10-13 11:21:19.916979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:38.374 [2024-10-13 11:21:19.917004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.374 [2024-10-13 11:21:19.917015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:40.907 [2024-10-13 11:21:21.917044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:40.907 [2024-10-13 11:21:21.917108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.907 [2024-10-13 11:21:21.917119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:40.907 [2024-10-13 11:21:21.917129] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:40.907 [2024-10-13 11:21:21.917156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:41.475 00:17:41.475 Latency(us) 00:17:41.475 [2024-10-13T11:21:23.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.475 [2024-10-13T11:21:23.077Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.475 Verification LBA range: start 0x0 length 0x4000 00:17:41.475 NVMe0n1 : 8.19 1989.54 7.77 15.63 0.00 63876.86 2740.60 7046430.72 00:17:41.475 [2024-10-13T11:21:23.077Z] =================================================================================================================== 00:17:41.475 [2024-10-13T11:21:23.077Z] Total : 1989.54 7.77 15.63 0.00 63876.86 2740.60 7046430.72 00:17:41.475 0 00:17:42.043 11:21:23 -- host/timeout.sh@62 -- # get_controller 00:17:42.043 11:21:23 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:42.043 11:21:23 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:42.301 11:21:23 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:42.301 11:21:23 -- host/timeout.sh@63 -- # get_bdev 00:17:42.301 11:21:23 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:42.302 11:21:23 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:42.561 11:21:23 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:42.561 11:21:23 -- host/timeout.sh@65 -- # wait 73220 00:17:42.561 11:21:23 -- host/timeout.sh@67 -- # killprocess 73202 00:17:42.561 11:21:23 -- common/autotest_common.sh@926 -- # '[' -z 73202 ']' 00:17:42.561 11:21:23 -- common/autotest_common.sh@930 -- # kill -0 73202 00:17:42.561 11:21:23 -- common/autotest_common.sh@931 -- # uname 00:17:42.561 11:21:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.561 11:21:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73202 00:17:42.561 11:21:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:42.561 11:21:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:42.561 killing process with pid 73202 00:17:42.561 11:21:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73202' 00:17:42.561 11:21:24 -- common/autotest_common.sh@945 -- # kill 73202 00:17:42.561 Received shutdown signal, test time was about 9.289141 seconds 00:17:42.561 00:17:42.561 Latency(us) 00:17:42.561 
[2024-10-13T11:21:24.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.561 [2024-10-13T11:21:24.163Z] =================================================================================================================== 00:17:42.561 [2024-10-13T11:21:24.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.561 11:21:24 -- common/autotest_common.sh@950 -- # wait 73202 00:17:42.820 11:21:24 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.820 [2024-10-13 11:21:24.406277] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.079 11:21:24 -- host/timeout.sh@74 -- # bdevperf_pid=73343 00:17:43.079 11:21:24 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:43.079 11:21:24 -- host/timeout.sh@76 -- # waitforlisten 73343 /var/tmp/bdevperf.sock 00:17:43.079 11:21:24 -- common/autotest_common.sh@819 -- # '[' -z 73343 ']' 00:17:43.079 11:21:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.079 11:21:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.079 11:21:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.079 11:21:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.079 11:21:24 -- common/autotest_common.sh@10 -- # set +x 00:17:43.079 [2024-10-13 11:21:24.480102] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:43.079 [2024-10-13 11:21:24.480206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73343 ] 00:17:43.079 [2024-10-13 11:21:24.619503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.079 [2024-10-13 11:21:24.678430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.016 11:21:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.016 11:21:25 -- common/autotest_common.sh@852 -- # return 0 00:17:44.016 11:21:25 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:44.276 11:21:25 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:44.535 NVMe0n1 00:17:44.535 11:21:25 -- host/timeout.sh@84 -- # rpc_pid=73366 00:17:44.535 11:21:25 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.535 11:21:25 -- host/timeout.sh@86 -- # sleep 1 00:17:44.535 Running I/O for 10 seconds... 
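For reference, the controller attach flow traced just above can be replayed by hand against a standalone bdevperf started with -z. This is a rough sketch assembled only from the RPC calls visible in this log; the socket path, target address, NQN and timeout values are copied from the run, the variable names are ours, and the meaning of -r -1 is inferred rather than documented here:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# bdev_nvme option used by this run (-r -1, presumably unlimited transport retries)
$RPC -s $SOCK bdev_nvme_set_options -r -1

# attach the TCP target; the three timeout knobs are what the timeout test exercises
$RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# kick off the queued verify workload (bdevperf was launched with -z -q 128 -o 4096 -w verify -t 10)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests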
00:17:45.472 11:21:27 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.734 [2024-10-13 11:21:27.225491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e17b0 is same with the state(5) to be set 00:17:45.734 [2024-10-13 11:21:27.225694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.225986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.225994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.226012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.226032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.734 [2024-10-13 11:21:27.226051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.734 [2024-10-13 11:21:27.226070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.226088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.734 [2024-10-13 11:21:27.226098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.734 [2024-10-13 11:21:27.226106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 
[2024-10-13 11:21:27.226832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.735 [2024-10-13 11:21:27.226920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.226984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.735 [2024-10-13 11:21:27.226995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.735 [2024-10-13 11:21:27.227009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.736 [2024-10-13 11:21:27.227707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.736 [2024-10-13 11:21:27.227867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.736 [2024-10-13 11:21:27.227876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.227894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.227936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.227954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.227974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.227984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.227993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.228032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.228051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.228069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 
[2024-10-13 11:21:27.228117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.228199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.737 [2024-10-13 11:21:27.228238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.737 [2024-10-13 11:21:27.228398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9520c0 is same with the state(5) to be set 00:17:45.737 [2024-10-13 11:21:27.228419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:45.737 [2024-10-13 11:21:27.228426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:45.737 [2024-10-13 11:21:27.228434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125344 len:8 PRP1 0x0 PRP2 0x0 00:17:45.737 [2024-10-13 11:21:27.228442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.737 [2024-10-13 11:21:27.228484] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9520c0 was disconnected and freed. reset controller. 
00:17:45.737 [2024-10-13 11:21:27.228722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:45.737 [2024-10-13 11:21:27.228794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:45.737 [2024-10-13 11:21:27.228909] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.737 [2024-10-13 11:21:27.228969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.737 [2024-10-13 11:21:27.229011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.737 [2024-10-13 11:21:27.229027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:45.737 [2024-10-13 11:21:27.229037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:45.737 [2024-10-13 11:21:27.229054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:45.737 [2024-10-13 11:21:27.229069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.737 [2024-10-13 11:21:27.229078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:45.737 [2024-10-13 11:21:27.229088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:45.737 [2024-10-13 11:21:27.229109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:45.737 [2024-10-13 11:21:27.229120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:45.737 11:21:27 -- host/timeout.sh@90 -- # sleep 1 00:17:46.674 [2024-10-13 11:21:28.229217] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.674 [2024-10-13 11:21:28.229311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.674 [2024-10-13 11:21:28.229385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.674 [2024-10-13 11:21:28.229404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:46.674 [2024-10-13 11:21:28.229417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:46.674 [2024-10-13 11:21:28.229441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:46.674 [2024-10-13 11:21:28.229458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:46.674 [2024-10-13 11:21:28.229467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:46.674 [2024-10-13 11:21:28.229477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:46.674 [2024-10-13 11:21:28.229503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
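The string of failed reconnects above is expected while the subsystem listener is down; the test restores it right below and the controller reset then completes. A minimal sketch of that remove/re-add cycle, using only calls that appear in this log (the NQN, address, port and the 1-second pause are taken from the host/timeout.sh trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# drop the TCP listener: queued I/O is aborted and reconnect attempts start failing
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

# leave it down briefly (host/timeout.sh@90)
sleep 1

# bring the listener back; the next reconnect attempt is expected to succeed
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420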
00:17:46.674 [2024-10-13 11:21:28.229514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:46.674 11:21:28 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.932 [2024-10-13 11:21:28.483861] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.932 11:21:28 -- host/timeout.sh@92 -- # wait 73366 00:17:47.869 [2024-10-13 11:21:29.245995] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:56.005 00:17:56.005 Latency(us) 00:17:56.005 [2024-10-13T11:21:37.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.005 [2024-10-13T11:21:37.607Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.005 Verification LBA range: start 0x0 length 0x4000 00:17:56.005 NVMe0n1 : 10.01 9815.66 38.34 0.00 0.00 13019.26 845.27 3019898.88 00:17:56.005 [2024-10-13T11:21:37.607Z] =================================================================================================================== 00:17:56.005 [2024-10-13T11:21:37.607Z] Total : 9815.66 38.34 0.00 0.00 13019.26 845.27 3019898.88 00:17:56.005 0 00:17:56.005 11:21:36 -- host/timeout.sh@97 -- # rpc_pid=73471 00:17:56.005 11:21:36 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.005 11:21:36 -- host/timeout.sh@98 -- # sleep 1 00:17:56.005 Running I/O for 10 seconds... 00:17:56.005 11:21:37 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.005 [2024-10-13 11:21:37.384621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.005 [2024-10-13 11:21:37.384749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e04a0 is same with the state(5) to be set 00:17:56.006 [2024-10-13 11:21:37.384980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.006 [2024-10-13 11:21:37.385755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.006 [2024-10-13 11:21:37.385786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.006 [2024-10-13 11:21:37.385794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 
[2024-10-13 11:21:37.386071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386278] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386767] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.007 [2024-10-13 11:21:37.386808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-10-13 11:21:37.386849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.007 [2024-10-13 11:21:37.386860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.386870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.386891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.386911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.386932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.386953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.386973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.386984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.386993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 
11:21:37.387420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:56.008 [2024-10-13 11:21:37.387570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-10-13 11:21:37.387706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965cc0 is same with the state(5) to be set 00:17:56.008 [2024-10-13 11:21:37.387729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:56.008 [2024-10-13 11:21:37.387736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:56.008 [2024-10-13 11:21:37.387744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126400 len:8 PRP1 0x0 PRP2 0x0 00:17:56.008 [2024-10-13 11:21:37.387753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387794] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x965cc0 was disconnected and freed. reset controller. 
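The dump above is the drain of I/O qpair 0x965cc0: once host/timeout.sh@99 removes the TCP listener earlier in this log, the initiator's qpair is torn down, every queued READ/WRITE is completed manually with ABORTED - SQ DELETION, the qpair is freed, and a controller reset is queued. A minimal sketch of that trigger step, reusing the rpc.py path, NQN, and address printed in this log (the surrounding bdevperf/sleep orchestration is omitted):

    #!/usr/bin/env bash
    # Sketch only: this is the remove_listener call visible at host/timeout.sh@99
    # above, issued while bdevperf still has I/O in flight (queue depth 128 per
    # the job header above).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Dropping the listener forces the initiator side to complete all queued
    # commands with "ABORTED - SQ DELETION" and schedule a controller reset,
    # exactly as dumped above.
    "$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420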
00:17:56.008 [2024-10-13 11:21:37.387865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.008 [2024-10-13 11:21:37.387881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.008 [2024-10-13 11:21:37.387900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.008 [2024-10-13 11:21:37.387918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.008 [2024-10-13 11:21:37.387936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.008 [2024-10-13 11:21:37.387946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:56.008 [2024-10-13 11:21:37.388162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:56.008 [2024-10-13 11:21:37.388182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:56.008 [2024-10-13 11:21:37.388288] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.008 [2024-10-13 11:21:37.388375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.008 [2024-10-13 11:21:37.388424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.008 [2024-10-13 11:21:37.388441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:56.008 [2024-10-13 11:21:37.388452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:56.008 [2024-10-13 11:21:37.388471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:56.008 [2024-10-13 11:21:37.388488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:56.008 [2024-10-13 11:21:37.388497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:56.009 [2024-10-13 11:21:37.388508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:56.009 [2024-10-13 11:21:37.388528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
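Each reconnect attempt above fails in connect() with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2 port 4420 any more, so controller reinitialization fails and bdev_nvme schedules another reset. A quick, test-independent way to decode that errno on the build host (python3 is assumed available, as rpc.py already requires it):

    # Not part of the test flow; only decodes the errno printed in the log above.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused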
00:17:56.009 [2024-10-13 11:21:37.388539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:56.009 11:21:37 -- host/timeout.sh@101 -- # sleep 3 00:17:56.942 [2024-10-13 11:21:38.388653] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.942 [2024-10-13 11:21:38.388788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.942 [2024-10-13 11:21:38.388834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.942 [2024-10-13 11:21:38.388852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:56.942 [2024-10-13 11:21:38.388866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:56.942 [2024-10-13 11:21:38.388890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:56.942 [2024-10-13 11:21:38.388915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:56.942 [2024-10-13 11:21:38.388925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:56.942 [2024-10-13 11:21:38.388935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:56.943 [2024-10-13 11:21:38.388979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:56.943 [2024-10-13 11:21:38.388991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.878 [2024-10-13 11:21:39.389105] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.878 [2024-10-13 11:21:39.389233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.878 [2024-10-13 11:21:39.389278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.878 [2024-10-13 11:21:39.389294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:57.878 [2024-10-13 11:21:39.389307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:57.878 [2024-10-13 11:21:39.389346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:57.878 [2024-10-13 11:21:39.389381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.878 [2024-10-13 11:21:39.389404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:57.878 [2024-10-13 11:21:39.389414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.878 [2024-10-13 11:21:39.389443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:57.878 [2024-10-13 11:21:39.389455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:58.815 [2024-10-13 11:21:40.391446] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.815 [2024-10-13 11:21:40.391542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.815 [2024-10-13 11:21:40.391591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.815 [2024-10-13 11:21:40.391609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ef010 with addr=10.0.0.2, port=4420 00:17:58.815 [2024-10-13 11:21:40.391623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef010 is same with the state(5) to be set 00:17:58.815 [2024-10-13 11:21:40.391841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ef010 (9): Bad file descriptor 00:17:58.815 [2024-10-13 11:21:40.392051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:58.815 [2024-10-13 11:21:40.392063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:58.815 [2024-10-13 11:21:40.392073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:58.815 [2024-10-13 11:21:40.394676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:58.815 [2024-10-13 11:21:40.394705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:59.074 11:21:40 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.074 [2024-10-13 11:21:40.656228] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.333 11:21:40 -- host/timeout.sh@103 -- # wait 73471 00:17:59.914 [2024-10-13 11:21:41.426974] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
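After repeated failed reconnect cycles (11:21:37 through 11:21:40), host/timeout.sh@102 re-adds the listener, the next attempt connects, and the queued reset completes ("Resetting controller successful" at 11:21:41). The nonzero Fail/s column in the summary that follows counts the I/O aborted while the path was down. A sketch of that recovery step, with the commands as they appear at host/timeout.sh@102-103 in this log:

    #!/usr/bin/env bash
    # Sketch only: re-adding the listener lets the pending controller reset
    # reconnect to 10.0.0.2:4420 and succeed, unblocking the bdevperf run.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # The test then waits on the backgrounded bdevperf.py perform_tests job
    # (reported as pid 73471 above) before collecting the summary.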
00:18:05.211 00:18:05.211 Latency(us) 00:18:05.211 [2024-10-13T11:21:46.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.211 [2024-10-13T11:21:46.813Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:05.211 Verification LBA range: start 0x0 length 0x4000 00:18:05.211 NVMe0n1 : 10.01 8383.76 32.75 6182.33 0.00 8771.24 577.16 3019898.88 00:18:05.211 [2024-10-13T11:21:46.813Z] =================================================================================================================== 00:18:05.211 [2024-10-13T11:21:46.813Z] Total : 8383.76 32.75 6182.33 0.00 8771.24 0.00 3019898.88 00:18:05.211 0 00:18:05.211 11:21:46 -- host/timeout.sh@105 -- # killprocess 73343 00:18:05.211 11:21:46 -- common/autotest_common.sh@926 -- # '[' -z 73343 ']' 00:18:05.211 11:21:46 -- common/autotest_common.sh@930 -- # kill -0 73343 00:18:05.211 11:21:46 -- common/autotest_common.sh@931 -- # uname 00:18:05.211 11:21:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.211 11:21:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73343 00:18:05.211 killing process with pid 73343 00:18:05.211 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.211 00:18:05.211 Latency(us) 00:18:05.211 [2024-10-13T11:21:46.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.211 [2024-10-13T11:21:46.813Z] =================================================================================================================== 00:18:05.211 [2024-10-13T11:21:46.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.211 11:21:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:05.211 11:21:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:05.211 11:21:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73343' 00:18:05.211 11:21:46 -- common/autotest_common.sh@945 -- # kill 73343 00:18:05.211 11:21:46 -- common/autotest_common.sh@950 -- # wait 73343 00:18:05.211 11:21:46 -- host/timeout.sh@110 -- # bdevperf_pid=73585 00:18:05.211 11:21:46 -- host/timeout.sh@112 -- # waitforlisten 73585 /var/tmp/bdevperf.sock 00:18:05.211 11:21:46 -- common/autotest_common.sh@819 -- # '[' -z 73585 ']' 00:18:05.211 11:21:46 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:05.211 11:21:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.211 11:21:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:05.211 11:21:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.211 11:21:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:05.211 11:21:46 -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 [2024-10-13 11:21:46.542685] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:05.211 [2024-10-13 11:21:46.542795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73585 ] 00:18:05.211 [2024-10-13 11:21:46.680668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.211 [2024-10-13 11:21:46.734872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.147 11:21:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.148 11:21:47 -- common/autotest_common.sh@852 -- # return 0 00:18:06.148 11:21:47 -- host/timeout.sh@116 -- # dtrace_pid=73601 00:18:06.148 11:21:47 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:06.148 11:21:47 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:06.406 11:21:47 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:06.664 NVMe0n1 00:18:06.664 11:21:48 -- host/timeout.sh@124 -- # rpc_pid=73647 00:18:06.664 11:21:48 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.664 11:21:48 -- host/timeout.sh@125 -- # sleep 1 00:18:06.664 Running I/O for 10 seconds... 00:18:07.601 11:21:49 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.862 [2024-10-13 11:21:49.406281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.862 [2024-10-13 11:21:49.406584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.406830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.862 [2024-10-13 11:21:49.406944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.406961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.862 [2024-10-13 11:21:49.406982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.406993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.862 [2024-10-13 11:21:49.407002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e010 is same with the state(5) to be set 00:18:07.862 [2024-10-13 11:21:49.407295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:07.862 [2024-10-13 11:21:49.407313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.862 [2024-10-13 11:21:49.407643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.862 [2024-10-13 11:21:49.407654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407956] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.407984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.407993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:07.863 [2024-10-13 11:21:49.408383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.863 [2024-10-13 11:21:49.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.863 [2024-10-13 11:21:49.408511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.408991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.408999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130456 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.864 [2024-10-13 11:21:49.409325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.864 [2024-10-13 11:21:49.409333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.409359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.409742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.410020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:07.865 [2024-10-13 11:21:49.410178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.410366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.410539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.410668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.410856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.410982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.411184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.411363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.411532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.411813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.411937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 
11:21:49.412518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.865 [2024-10-13 11:21:49.412840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b10c0 is same with the state(5) to be set 00:18:07.865 [2024-10-13 11:21:49.412862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.865 [2024-10-13 11:21:49.412870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.865 [2024-10-13 11:21:49.412878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49960 len:8 PRP1 0x0 PRP2 0x0 00:18:07.865 [2024-10-13 11:21:49.412887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.865 [2024-10-13 11:21:49.412930] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15b10c0 was disconnected and freed. reset controller. 
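Freeing the disconnected qpair above hands control back to the bdev layer, which begins the reconnect loop that follows. This bdevperf instance attached the controller with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 (see the bdev_nvme_attach_controller call earlier in the log), so the retries land roughly two seconds apart; the bpftrace trace.txt dumped near the end of the run shows reconnect delays at about 3400 ms, 5402 ms and 7404 ms after a reset at about 1400 ms, i.e. ~2000 ms spacing. A small sketch of the check the test applies to that trace, assuming the trace.txt produced by scripts/bpftrace.sh:

    # The test only passes if more than two reconnect delays were recorded
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    (( delays > 2 )) && echo "recorded $delays reconnect delays, ~2 s apart as configured"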
00:18:07.865 [2024-10-13 11:21:49.413204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.865 [2024-10-13 11:21:49.413230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e010 (9): Bad file descriptor 00:18:07.865 [2024-10-13 11:21:49.413361] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.865 [2024-10-13 11:21:49.413430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.865 [2024-10-13 11:21:49.413474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.865 [2024-10-13 11:21:49.413491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154e010 with addr=10.0.0.2, port=4420 00:18:07.865 [2024-10-13 11:21:49.413502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e010 is same with the state(5) to be set 00:18:07.865 [2024-10-13 11:21:49.413521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e010 (9): Bad file descriptor 00:18:07.865 [2024-10-13 11:21:49.413538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:07.865 [2024-10-13 11:21:49.413548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:07.865 [2024-10-13 11:21:49.413558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.865 [2024-10-13 11:21:49.413581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:07.865 [2024-10-13 11:21:49.413592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.865 11:21:49 -- host/timeout.sh@128 -- # wait 73647 00:18:10.399 [2024-10-13 11:21:51.413777] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.399 [2024-10-13 11:21:51.414127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.399 [2024-10-13 11:21:51.414225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.399 [2024-10-13 11:21:51.414382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154e010 with addr=10.0.0.2, port=4420 00:18:10.399 [2024-10-13 11:21:51.414524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e010 is same with the state(5) to be set 00:18:10.399 [2024-10-13 11:21:51.414681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e010 (9): Bad file descriptor 00:18:10.399 [2024-10-13 11:21:51.414888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.399 [2024-10-13 11:21:51.415043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:10.399 [2024-10-13 11:21:51.415189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:10.399 [2024-10-13 11:21:51.415250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:10.399 [2024-10-13 11:21:51.415359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:12.305 [2024-10-13 11:21:53.415593] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.305 [2024-10-13 11:21:53.415916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.305 [2024-10-13 11:21:53.416092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.305 [2024-10-13 11:21:53.416152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154e010 with addr=10.0.0.2, port=4420 00:18:12.305 [2024-10-13 11:21:53.416422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e010 is same with the state(5) to be set 00:18:12.305 [2024-10-13 11:21:53.416613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e010 (9): Bad file descriptor 00:18:12.305 [2024-10-13 11:21:53.416865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:12.305 [2024-10-13 11:21:53.417004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:12.305 [2024-10-13 11:21:53.417143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.305 [2024-10-13 11:21:53.417208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.305 [2024-10-13 11:21:53.417317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.206 [2024-10-13 11:21:55.417535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.206 [2024-10-13 11:21:55.417785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.206 [2024-10-13 11:21:55.417806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.206 [2024-10-13 11:21:55.417817] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:14.206 [2024-10-13 11:21:55.417852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:15.141 00:18:15.141 Latency(us) 00:18:15.141 [2024-10-13T11:21:56.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.141 [2024-10-13T11:21:56.743Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:15.141 NVMe0n1 : 8.22 2349.42 9.18 15.58 0.00 54059.77 7089.80 7046430.72 00:18:15.141 [2024-10-13T11:21:56.743Z] =================================================================================================================== 00:18:15.141 [2024-10-13T11:21:56.743Z] Total : 2349.42 9.18 15.58 0.00 54059.77 7089.80 7046430.72 00:18:15.141 0 00:18:15.141 11:21:56 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.141 Attaching 5 probes... 
00:18:15.141 1399.791815: reset bdev controller NVMe0 00:18:15.141 1399.860263: reconnect bdev controller NVMe0 00:18:15.141 3400.222873: reconnect delay bdev controller NVMe0 00:18:15.141 3400.260220: reconnect bdev controller NVMe0 00:18:15.141 5402.039053: reconnect delay bdev controller NVMe0 00:18:15.141 5402.074985: reconnect bdev controller NVMe0 00:18:15.141 7404.090087: reconnect delay bdev controller NVMe0 00:18:15.141 7404.123430: reconnect bdev controller NVMe0 00:18:15.141 11:21:56 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:15.141 11:21:56 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:15.141 11:21:56 -- host/timeout.sh@136 -- # kill 73601 00:18:15.141 11:21:56 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.141 11:21:56 -- host/timeout.sh@139 -- # killprocess 73585 00:18:15.141 11:21:56 -- common/autotest_common.sh@926 -- # '[' -z 73585 ']' 00:18:15.141 11:21:56 -- common/autotest_common.sh@930 -- # kill -0 73585 00:18:15.141 11:21:56 -- common/autotest_common.sh@931 -- # uname 00:18:15.141 11:21:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:15.141 11:21:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73585 00:18:15.141 killing process with pid 73585 00:18:15.141 Received shutdown signal, test time was about 8.281642 seconds 00:18:15.141 00:18:15.141 Latency(us) 00:18:15.141 [2024-10-13T11:21:56.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.141 [2024-10-13T11:21:56.743Z] =================================================================================================================== 00:18:15.141 [2024-10-13T11:21:56.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.141 11:21:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:15.141 11:21:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:15.141 11:21:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73585' 00:18:15.141 11:21:56 -- common/autotest_common.sh@945 -- # kill 73585 00:18:15.141 11:21:56 -- common/autotest_common.sh@950 -- # wait 73585 00:18:15.141 11:21:56 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.400 11:21:56 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:15.400 11:21:56 -- host/timeout.sh@145 -- # nvmftestfini 00:18:15.400 11:21:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.400 11:21:56 -- nvmf/common.sh@116 -- # sync 00:18:15.400 11:21:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.400 11:21:56 -- nvmf/common.sh@119 -- # set +e 00:18:15.400 11:21:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.400 11:21:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.400 rmmod nvme_tcp 00:18:15.400 rmmod nvme_fabrics 00:18:15.400 rmmod nvme_keyring 00:18:15.400 11:21:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.400 11:21:56 -- nvmf/common.sh@123 -- # set -e 00:18:15.400 11:21:56 -- nvmf/common.sh@124 -- # return 0 00:18:15.400 11:21:56 -- nvmf/common.sh@477 -- # '[' -n 73142 ']' 00:18:15.400 11:21:56 -- nvmf/common.sh@478 -- # killprocess 73142 00:18:15.400 11:21:56 -- common/autotest_common.sh@926 -- # '[' -z 73142 ']' 00:18:15.400 11:21:56 -- common/autotest_common.sh@930 -- # kill -0 73142 00:18:15.658 11:21:56 -- common/autotest_common.sh@931 -- # uname 00:18:15.658 11:21:57 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:18:15.658 11:21:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73142 00:18:15.658 killing process with pid 73142 00:18:15.658 11:21:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:15.658 11:21:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:15.658 11:21:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73142' 00:18:15.658 11:21:57 -- common/autotest_common.sh@945 -- # kill 73142 00:18:15.658 11:21:57 -- common/autotest_common.sh@950 -- # wait 73142 00:18:15.659 11:21:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.659 11:21:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.659 11:21:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.659 11:21:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.659 11:21:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.659 11:21:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.659 11:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.659 11:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.659 11:21:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:15.917 ************************************ 00:18:15.917 END TEST nvmf_timeout 00:18:15.917 ************************************ 00:18:15.917 00:18:15.917 real 0m47.082s 00:18:15.917 user 2m18.913s 00:18:15.917 sys 0m5.199s 00:18:15.917 11:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.917 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:15.917 11:21:57 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:15.917 11:21:57 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:15.917 11:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:15.917 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:15.917 11:21:57 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:15.917 00:18:15.917 real 10m28.490s 00:18:15.917 user 29m23.664s 00:18:15.917 sys 3m23.647s 00:18:15.917 11:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.917 ************************************ 00:18:15.917 END TEST nvmf_tcp 00:18:15.917 ************************************ 00:18:15.917 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:15.917 11:21:57 -- spdk/autotest.sh@296 -- # [[ 1 -eq 0 ]] 00:18:15.917 11:21:57 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:15.917 11:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:15.917 11:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:15.917 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:18:15.917 ************************************ 00:18:15.917 START TEST nvmf_dif 00:18:15.917 ************************************ 00:18:15.917 11:21:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:15.917 * Looking for test storage... 
00:18:15.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.918 11:21:57 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.918 11:21:57 -- nvmf/common.sh@7 -- # uname -s 00:18:15.918 11:21:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.918 11:21:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.918 11:21:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.918 11:21:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.918 11:21:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.918 11:21:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.918 11:21:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.918 11:21:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.918 11:21:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.918 11:21:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:18:15.918 11:21:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:18:15.918 11:21:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.918 11:21:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.918 11:21:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.918 11:21:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.918 11:21:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.918 11:21:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.918 11:21:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.918 11:21:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.918 11:21:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.918 11:21:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.918 11:21:57 -- paths/export.sh@5 -- # export PATH 00:18:15.918 11:21:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.918 11:21:57 -- nvmf/common.sh@46 -- # : 0 00:18:15.918 11:21:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.918 11:21:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.918 11:21:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.918 11:21:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.918 11:21:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.918 11:21:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.918 11:21:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.918 11:21:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.918 11:21:57 -- target/dif.sh@15 -- # NULL_META=16 00:18:15.918 11:21:57 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:15.918 11:21:57 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:15.918 11:21:57 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:15.918 11:21:57 -- target/dif.sh@135 -- # nvmftestinit 00:18:15.918 11:21:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:15.918 11:21:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.918 11:21:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.918 11:21:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.918 11:21:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.918 11:21:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.918 11:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:15.918 11:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.918 11:21:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:15.918 11:21:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:15.918 11:21:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.918 11:21:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.918 11:21:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.918 11:21:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:15.918 11:21:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.918 11:21:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.918 11:21:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.918 11:21:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.918 11:21:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.918 11:21:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.918 11:21:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.918 11:21:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.918 11:21:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:16.176 11:21:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:16.176 Cannot find device "nvmf_tgt_br" 
00:18:16.176 11:21:57 -- nvmf/common.sh@154 -- # true 00:18:16.176 11:21:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.177 Cannot find device "nvmf_tgt_br2" 00:18:16.177 11:21:57 -- nvmf/common.sh@155 -- # true 00:18:16.177 11:21:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:16.177 11:21:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:16.177 Cannot find device "nvmf_tgt_br" 00:18:16.177 11:21:57 -- nvmf/common.sh@157 -- # true 00:18:16.177 11:21:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:16.177 Cannot find device "nvmf_tgt_br2" 00:18:16.177 11:21:57 -- nvmf/common.sh@158 -- # true 00:18:16.177 11:21:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.177 11:21:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.177 11:21:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.177 11:21:57 -- nvmf/common.sh@161 -- # true 00:18:16.177 11:21:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.177 11:21:57 -- nvmf/common.sh@162 -- # true 00:18:16.177 11:21:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.177 11:21:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.177 11:21:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.177 11:21:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.177 11:21:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.177 11:21:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.177 11:21:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.177 11:21:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.177 11:21:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.177 11:21:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.177 11:21:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.177 11:21:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.177 11:21:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.177 11:21:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.177 11:21:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.177 11:21:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.177 11:21:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.177 11:21:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.177 11:21:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.435 11:21:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.435 11:21:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.435 11:21:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.435 11:21:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.435 11:21:57 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:18:16.435 00:18:16.435 --- 10.0.0.2 ping statistics --- 00:18:16.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.435 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:16.435 11:21:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:16.435 00:18:16.435 --- 10.0.0.3 ping statistics --- 00:18:16.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.435 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:16.435 11:21:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:16.435 00:18:16.435 --- 10.0.0.1 ping statistics --- 00:18:16.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.435 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:16.435 11:21:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.435 11:21:57 -- nvmf/common.sh@421 -- # return 0 00:18:16.435 11:21:57 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:16.435 11:21:57 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:16.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.694 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.694 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.694 11:21:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.694 11:21:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.694 11:21:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.694 11:21:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.694 11:21:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.694 11:21:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.694 11:21:58 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:16.694 11:21:58 -- target/dif.sh@137 -- # nvmfappstart 00:18:16.694 11:21:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.694 11:21:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:16.694 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:18:16.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.694 11:21:58 -- nvmf/common.sh@469 -- # nvmfpid=74082 00:18:16.694 11:21:58 -- nvmf/common.sh@470 -- # waitforlisten 74082 00:18:16.694 11:21:58 -- common/autotest_common.sh@819 -- # '[' -z 74082 ']' 00:18:16.694 11:21:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.694 11:21:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.694 11:21:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:16.694 11:21:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:16.694 11:21:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:16.694 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:18:16.952 [2024-10-13 11:21:58.323430] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:16.952 [2024-10-13 11:21:58.323529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.952 [2024-10-13 11:21:58.465583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.952 [2024-10-13 11:21:58.532821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.952 [2024-10-13 11:21:58.532986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.952 [2024-10-13 11:21:58.533002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.952 [2024-10-13 11:21:58.533013] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.952 [2024-10-13 11:21:58.533055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.888 11:21:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.888 11:21:59 -- common/autotest_common.sh@852 -- # return 0 00:18:17.888 11:21:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:17.888 11:21:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:17.888 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.888 11:21:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.888 11:21:59 -- target/dif.sh@139 -- # create_transport 00:18:17.888 11:21:59 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:17.888 11:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.888 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.888 [2024-10-13 11:21:59.394597] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.888 11:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.888 11:21:59 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:17.888 11:21:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:17.888 11:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.888 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.888 ************************************ 00:18:17.888 START TEST fio_dif_1_default 00:18:17.888 ************************************ 00:18:17.888 11:21:59 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:18:17.888 11:21:59 -- target/dif.sh@86 -- # create_subsystems 0 00:18:17.888 11:21:59 -- target/dif.sh@28 -- # local sub 00:18:17.888 11:21:59 -- target/dif.sh@30 -- # for sub in "$@" 00:18:17.888 11:21:59 -- target/dif.sh@31 -- # create_subsystem 0 00:18:17.888 11:21:59 -- target/dif.sh@18 -- # local sub_id=0 00:18:17.888 11:21:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:17.888 11:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.888 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.888 bdev_null0 00:18:17.888 11:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.888 11:21:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:17.888 11:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.889 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.889 11:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.889 11:21:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:17.889 11:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.889 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.889 11:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.889 11:21:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:17.889 11:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.889 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.889 [2024-10-13 11:21:59.442732] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.889 11:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.889 11:21:59 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:17.889 11:21:59 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:17.889 11:21:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:17.889 11:21:59 -- nvmf/common.sh@520 -- # config=() 00:18:17.889 11:21:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:17.889 11:21:59 -- target/dif.sh@82 -- # gen_fio_conf 00:18:17.889 11:21:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:17.889 11:21:59 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:17.889 11:21:59 -- target/dif.sh@54 -- # local file 00:18:17.889 11:21:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:17.889 11:21:59 -- target/dif.sh@56 -- # cat 00:18:17.889 11:21:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.889 11:21:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:17.889 11:21:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:17.889 11:21:59 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.889 11:21:59 -- common/autotest_common.sh@1320 -- # shift 00:18:17.889 11:21:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:17.889 { 00:18:17.889 "params": { 00:18:17.889 "name": "Nvme$subsystem", 00:18:17.889 "trtype": "$TEST_TRANSPORT", 00:18:17.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.889 "adrfam": "ipv4", 00:18:17.889 "trsvcid": "$NVMF_PORT", 00:18:17.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.889 "hdgst": ${hdgst:-false}, 00:18:17.889 "ddgst": ${ddgst:-false} 00:18:17.889 }, 00:18:17.889 "method": "bdev_nvme_attach_controller" 00:18:17.889 } 00:18:17.889 EOF 00:18:17.889 )") 00:18:17.889 11:21:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:17.889 11:21:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.889 11:21:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:17.889 11:21:59 -- nvmf/common.sh@542 -- # cat 00:18:17.889 11:21:59 -- target/dif.sh@72 -- # (( file <= files )) 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.889 
11:21:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:17.889 11:21:59 -- nvmf/common.sh@544 -- # jq . 00:18:17.889 11:21:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:17.889 11:21:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:17.889 "params": { 00:18:17.889 "name": "Nvme0", 00:18:17.889 "trtype": "tcp", 00:18:17.889 "traddr": "10.0.0.2", 00:18:17.889 "adrfam": "ipv4", 00:18:17.889 "trsvcid": "4420", 00:18:17.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:17.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:17.889 "hdgst": false, 00:18:17.889 "ddgst": false 00:18:17.889 }, 00:18:17.889 "method": "bdev_nvme_attach_controller" 00:18:17.889 }' 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:17.889 11:21:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:17.889 11:21:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:17.889 11:21:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:18.147 11:21:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:18.147 11:21:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:18.147 11:21:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.147 11:21:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:18.147 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:18.147 fio-3.35 00:18:18.147 Starting 1 thread 00:18:18.406 [2024-10-13 11:21:59.990672] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:18.406 [2024-10-13 11:21:59.991407] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:30.619 00:18:30.619 filename0: (groupid=0, jobs=1): err= 0: pid=74153: Sun Oct 13 11:22:10 2024 00:18:30.619 read: IOPS=9234, BW=36.1MiB/s (37.8MB/s)(361MiB/10001msec) 00:18:30.619 slat (nsec): min=5770, max=66836, avg=8096.59, stdev=3590.32 00:18:30.619 clat (usec): min=313, max=3657, avg=409.55, stdev=54.86 00:18:30.619 lat (usec): min=319, max=3689, avg=417.64, stdev=55.70 00:18:30.619 clat percentiles (usec): 00:18:30.619 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:18:30.619 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:18:30.619 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 506], 00:18:30.619 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 660], 99.95th=[ 685], 00:18:30.619 | 99.99th=[ 742] 00:18:30.619 bw ( KiB/s): min=34842, max=39040, per=100.00%, avg=36963.05, stdev=1081.88, samples=19 00:18:30.619 iops : min= 8710, max= 9760, avg=9240.74, stdev=270.52, samples=19 00:18:30.619 lat (usec) : 500=94.37%, 750=5.62% 00:18:30.619 lat (msec) : 2=0.01%, 4=0.01% 00:18:30.619 cpu : usr=86.34%, sys=11.93%, ctx=13, majf=0, minf=9 00:18:30.619 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.619 issued rwts: total=92352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.619 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:30.619 00:18:30.619 Run status group 0 (all jobs): 00:18:30.619 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=361MiB (378MB), run=10001-10001msec 00:18:30.619 11:22:10 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:30.619 11:22:10 -- target/dif.sh@43 -- # local sub 00:18:30.619 11:22:10 -- target/dif.sh@45 -- # for sub in "$@" 00:18:30.619 11:22:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:30.619 11:22:10 -- target/dif.sh@36 -- # local sub_id=0 00:18:30.619 11:22:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 11:22:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 ************************************ 00:18:30.619 END TEST fio_dif_1_default 00:18:30.619 ************************************ 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 00:18:30.619 real 0m10.897s 00:18:30.619 user 0m9.198s 00:18:30.619 sys 0m1.433s 00:18:30.619 11:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 11:22:10 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:30.619 11:22:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:30.619 11:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 ************************************ 00:18:30.619 START TEST 
fio_dif_1_multi_subsystems 00:18:30.619 ************************************ 00:18:30.619 11:22:10 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:18:30.619 11:22:10 -- target/dif.sh@92 -- # local files=1 00:18:30.619 11:22:10 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:30.619 11:22:10 -- target/dif.sh@28 -- # local sub 00:18:30.619 11:22:10 -- target/dif.sh@30 -- # for sub in "$@" 00:18:30.619 11:22:10 -- target/dif.sh@31 -- # create_subsystem 0 00:18:30.619 11:22:10 -- target/dif.sh@18 -- # local sub_id=0 00:18:30.619 11:22:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 bdev_null0 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 11:22:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 11:22:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 11:22:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 [2024-10-13 11:22:10.395010] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.619 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.619 11:22:10 -- target/dif.sh@30 -- # for sub in "$@" 00:18:30.619 11:22:10 -- target/dif.sh@31 -- # create_subsystem 1 00:18:30.619 11:22:10 -- target/dif.sh@18 -- # local sub_id=1 00:18:30.619 11:22:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:30.619 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.619 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 bdev_null1 00:18:30.620 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.620 11:22:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:30.620 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.620 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.620 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.620 11:22:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:30.620 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.620 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:18:30.620 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.620 11:22:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.620 11:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.620 11:22:10 -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.620 11:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.620 11:22:10 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:30.620 11:22:10 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:30.620 11:22:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:30.620 11:22:10 -- nvmf/common.sh@520 -- # config=() 00:18:30.620 11:22:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:30.620 11:22:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:30.620 11:22:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:30.620 { 00:18:30.620 "params": { 00:18:30.620 "name": "Nvme$subsystem", 00:18:30.620 "trtype": "$TEST_TRANSPORT", 00:18:30.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.620 "adrfam": "ipv4", 00:18:30.620 "trsvcid": "$NVMF_PORT", 00:18:30.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.620 "hdgst": ${hdgst:-false}, 00:18:30.620 "ddgst": ${ddgst:-false} 00:18:30.620 }, 00:18:30.620 "method": "bdev_nvme_attach_controller" 00:18:30.620 } 00:18:30.620 EOF 00:18:30.620 )") 00:18:30.620 11:22:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:30.620 11:22:10 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:30.620 11:22:10 -- target/dif.sh@82 -- # gen_fio_conf 00:18:30.620 11:22:10 -- target/dif.sh@54 -- # local file 00:18:30.620 11:22:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:30.620 11:22:10 -- nvmf/common.sh@542 -- # cat 00:18:30.620 11:22:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:30.620 11:22:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:30.620 11:22:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.620 11:22:10 -- target/dif.sh@56 -- # cat 00:18:30.620 11:22:10 -- common/autotest_common.sh@1320 -- # shift 00:18:30.620 11:22:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:30.620 11:22:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.620 11:22:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:30.620 11:22:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:30.620 { 00:18:30.620 "params": { 00:18:30.620 "name": "Nvme$subsystem", 00:18:30.620 "trtype": "$TEST_TRANSPORT", 00:18:30.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.620 "adrfam": "ipv4", 00:18:30.620 "trsvcid": "$NVMF_PORT", 00:18:30.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.620 "hdgst": ${hdgst:-false}, 00:18:30.620 "ddgst": ${ddgst:-false} 00:18:30.620 }, 00:18:30.620 "method": "bdev_nvme_attach_controller" 00:18:30.620 } 00:18:30.620 EOF 00:18:30.620 )") 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:30.620 11:22:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:30.620 11:22:10 -- target/dif.sh@72 -- # (( file <= files )) 00:18:30.620 11:22:10 -- target/dif.sh@73 -- # cat 00:18:30.620 11:22:10 -- nvmf/common.sh@542 -- # cat 00:18:30.620 11:22:10 -- target/dif.sh@72 
-- # (( file++ )) 00:18:30.620 11:22:10 -- nvmf/common.sh@544 -- # jq . 00:18:30.620 11:22:10 -- target/dif.sh@72 -- # (( file <= files )) 00:18:30.620 11:22:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:30.620 11:22:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:30.620 "params": { 00:18:30.620 "name": "Nvme0", 00:18:30.620 "trtype": "tcp", 00:18:30.620 "traddr": "10.0.0.2", 00:18:30.620 "adrfam": "ipv4", 00:18:30.620 "trsvcid": "4420", 00:18:30.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:30.620 "hdgst": false, 00:18:30.620 "ddgst": false 00:18:30.620 }, 00:18:30.620 "method": "bdev_nvme_attach_controller" 00:18:30.620 },{ 00:18:30.620 "params": { 00:18:30.620 "name": "Nvme1", 00:18:30.620 "trtype": "tcp", 00:18:30.620 "traddr": "10.0.0.2", 00:18:30.620 "adrfam": "ipv4", 00:18:30.620 "trsvcid": "4420", 00:18:30.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.620 "hdgst": false, 00:18:30.620 "ddgst": false 00:18:30.620 }, 00:18:30.620 "method": "bdev_nvme_attach_controller" 00:18:30.620 }' 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:30.620 11:22:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:30.620 11:22:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:30.620 11:22:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:30.620 11:22:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:30.620 11:22:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:30.620 11:22:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:30.620 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:30.620 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:30.620 fio-3.35 00:18:30.620 Starting 2 threads 00:18:30.620 [2024-10-13 11:22:11.050670] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:30.620 [2024-10-13 11:22:11.050755] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:40.612 00:18:40.613 filename0: (groupid=0, jobs=1): err= 0: pid=74314: Sun Oct 13 11:22:21 2024 00:18:40.613 read: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(197MiB/10001msec) 00:18:40.613 slat (nsec): min=6412, max=63615, avg=13260.95, stdev=4832.26 00:18:40.613 clat (usec): min=581, max=1168, avg=757.10, stdev=68.83 00:18:40.613 lat (usec): min=588, max=1210, avg=770.36, stdev=69.98 00:18:40.613 clat percentiles (usec): 00:18:40.613 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 701], 00:18:40.613 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:18:40.613 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 881], 00:18:40.613 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 988], 00:18:40.613 | 99.99th=[ 1037] 00:18:40.613 bw ( KiB/s): min=19488, max=20864, per=49.98%, avg=20165.05, stdev=361.14, samples=19 00:18:40.613 iops : min= 4872, max= 5216, avg=5041.26, stdev=90.28, samples=19 00:18:40.613 lat (usec) : 750=51.27%, 1000=48.69% 00:18:40.613 lat (msec) : 2=0.03% 00:18:40.613 cpu : usr=90.19%, sys=8.31%, ctx=15, majf=0, minf=0 00:18:40.613 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.613 issued rwts: total=50436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.613 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:40.613 filename1: (groupid=0, jobs=1): err= 0: pid=74315: Sun Oct 13 11:22:21 2024 00:18:40.613 read: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(197MiB/10001msec) 00:18:40.613 slat (nsec): min=6318, max=71821, avg=13229.22, stdev=4891.31 00:18:40.613 clat (usec): min=617, max=1322, avg=756.58, stdev=64.13 00:18:40.613 lat (usec): min=626, max=1359, avg=769.81, stdev=64.99 00:18:40.613 clat percentiles (usec): 00:18:40.613 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 701], 00:18:40.613 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:18:40.613 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:18:40.613 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 971], 99.95th=[ 979], 00:18:40.613 | 99.99th=[ 1004] 00:18:40.613 bw ( KiB/s): min=19488, max=20864, per=49.98%, avg=20165.05, stdev=361.14, samples=19 00:18:40.613 iops : min= 4872, max= 5216, avg=5041.26, stdev=90.28, samples=19 00:18:40.613 lat (usec) : 750=52.56%, 1000=47.43% 00:18:40.613 lat (msec) : 2=0.01% 00:18:40.613 cpu : usr=89.95%, sys=8.62%, ctx=5, majf=0, minf=0 00:18:40.613 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.613 issued rwts: total=50436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.613 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:40.613 00:18:40.613 Run status group 0 (all jobs): 00:18:40.613 READ: bw=39.4MiB/s (41.3MB/s), 19.7MiB/s-19.7MiB/s (20.7MB/s-20.7MB/s), io=394MiB (413MB), run=10001-10001msec 00:18:40.613 11:22:21 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:40.613 11:22:21 -- target/dif.sh@43 -- # local sub 00:18:40.613 11:22:21 -- target/dif.sh@45 -- # for sub in "$@" 00:18:40.613 11:22:21 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:18:40.613 11:22:21 -- target/dif.sh@36 -- # local sub_id=0 00:18:40.613 11:22:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@45 -- # for sub in "$@" 00:18:40.613 11:22:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:40.613 11:22:21 -- target/dif.sh@36 -- # local sub_id=1 00:18:40.613 11:22:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 ************************************ 00:18:40.613 END TEST fio_dif_1_multi_subsystems 00:18:40.613 ************************************ 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 00:18:40.613 real 0m11.005s 00:18:40.613 user 0m18.705s 00:18:40.613 sys 0m1.929s 00:18:40.613 11:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:40.613 11:22:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:40.613 11:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 ************************************ 00:18:40.613 START TEST fio_dif_rand_params 00:18:40.613 ************************************ 00:18:40.613 11:22:21 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:18:40.613 11:22:21 -- target/dif.sh@100 -- # local NULL_DIF 00:18:40.613 11:22:21 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:40.613 11:22:21 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:40.613 11:22:21 -- target/dif.sh@103 -- # bs=128k 00:18:40.613 11:22:21 -- target/dif.sh@103 -- # numjobs=3 00:18:40.613 11:22:21 -- target/dif.sh@103 -- # iodepth=3 00:18:40.613 11:22:21 -- target/dif.sh@103 -- # runtime=5 00:18:40.613 11:22:21 -- target/dif.sh@105 -- # create_subsystems 0 00:18:40.613 11:22:21 -- target/dif.sh@28 -- # local sub 00:18:40.613 11:22:21 -- target/dif.sh@30 -- # for sub in "$@" 00:18:40.613 11:22:21 -- target/dif.sh@31 -- # create_subsystem 0 00:18:40.613 11:22:21 -- target/dif.sh@18 -- # local sub_id=0 00:18:40.613 11:22:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 bdev_null0 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 
11:22:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:40.613 11:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.613 11:22:21 -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 [2024-10-13 11:22:21.452428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.613 11:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.613 11:22:21 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:40.613 11:22:21 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:40.613 11:22:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:40.613 11:22:21 -- nvmf/common.sh@520 -- # config=() 00:18:40.613 11:22:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:40.613 11:22:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:40.613 11:22:21 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:40.613 11:22:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:40.613 11:22:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:40.613 11:22:21 -- target/dif.sh@82 -- # gen_fio_conf 00:18:40.613 11:22:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:40.613 { 00:18:40.613 "params": { 00:18:40.613 "name": "Nvme$subsystem", 00:18:40.613 "trtype": "$TEST_TRANSPORT", 00:18:40.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:40.613 "adrfam": "ipv4", 00:18:40.613 "trsvcid": "$NVMF_PORT", 00:18:40.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:40.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:40.613 "hdgst": ${hdgst:-false}, 00:18:40.613 "ddgst": ${ddgst:-false} 00:18:40.613 }, 00:18:40.613 "method": "bdev_nvme_attach_controller" 00:18:40.613 } 00:18:40.613 EOF 00:18:40.613 )") 00:18:40.613 11:22:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.613 11:22:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:40.613 11:22:21 -- target/dif.sh@54 -- # local file 00:18:40.613 11:22:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.613 11:22:21 -- target/dif.sh@56 -- # cat 00:18:40.613 11:22:21 -- common/autotest_common.sh@1320 -- # shift 00:18:40.613 11:22:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:40.613 11:22:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.613 11:22:21 -- nvmf/common.sh@542 -- # cat 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # grep libasan 
00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:40.614 11:22:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:40.614 11:22:21 -- target/dif.sh@72 -- # (( file <= files )) 00:18:40.614 11:22:21 -- nvmf/common.sh@544 -- # jq . 00:18:40.614 11:22:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:40.614 11:22:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:40.614 "params": { 00:18:40.614 "name": "Nvme0", 00:18:40.614 "trtype": "tcp", 00:18:40.614 "traddr": "10.0.0.2", 00:18:40.614 "adrfam": "ipv4", 00:18:40.614 "trsvcid": "4420", 00:18:40.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:40.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:40.614 "hdgst": false, 00:18:40.614 "ddgst": false 00:18:40.614 }, 00:18:40.614 "method": "bdev_nvme_attach_controller" 00:18:40.614 }' 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:40.614 11:22:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:40.614 11:22:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:40.614 11:22:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:40.614 11:22:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:40.614 11:22:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.614 11:22:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:40.614 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:40.614 ... 00:18:40.614 fio-3.35 00:18:40.614 Starting 3 threads 00:18:40.614 [2024-10-13 11:22:21.992804] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:40.614 [2024-10-13 11:22:21.992868] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:45.883 00:18:45.883 filename0: (groupid=0, jobs=1): err= 0: pid=74465: Sun Oct 13 11:22:27 2024 00:18:45.883 read: IOPS=266, BW=33.3MiB/s (35.0MB/s)(167MiB/5004msec) 00:18:45.883 slat (usec): min=6, max=156, avg=16.19, stdev= 6.91 00:18:45.883 clat (usec): min=4306, max=12330, avg=11207.30, stdev=573.82 00:18:45.883 lat (usec): min=4313, max=12354, avg=11223.50, stdev=574.15 00:18:45.883 clat percentiles (usec): 00:18:45.883 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:18:45.883 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:18:45.883 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:18:45.883 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12256], 99.95th=[12387], 00:18:45.883 | 99.99th=[12387] 00:18:45.883 bw ( KiB/s): min=33090, max=35328, per=33.34%, avg=34105.80, stdev=635.69, samples=10 00:18:45.883 iops : min= 258, max= 276, avg=266.40, stdev= 5.06, samples=10 00:18:45.883 lat (msec) : 10=0.22%, 20=99.78% 00:18:45.883 cpu : usr=90.53%, sys=8.55%, ctx=70, majf=0, minf=9 00:18:45.883 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.884 filename0: (groupid=0, jobs=1): err= 0: pid=74466: Sun Oct 13 11:22:27 2024 00:18:45.884 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(167MiB/5002msec) 00:18:45.884 slat (nsec): min=6709, max=54491, avg=15066.56, stdev=5794.26 00:18:45.884 clat (usec): min=10330, max=14867, avg=11232.16, stdev=499.28 00:18:45.884 lat (usec): min=10343, max=14895, avg=11247.23, stdev=499.49 00:18:45.884 clat percentiles (usec): 00:18:45.884 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:18:45.884 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:18:45.884 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:18:45.884 | 99.00th=[12125], 99.50th=[12256], 99.90th=[14877], 99.95th=[14877], 00:18:45.884 | 99.99th=[14877] 00:18:45.884 bw ( KiB/s): min=33024, max=34560, per=33.19%, avg=33947.33, stdev=493.61, samples=9 00:18:45.884 iops : min= 258, max= 270, avg=265.11, stdev= 3.76, samples=9 00:18:45.884 lat (msec) : 20=100.00% 00:18:45.884 cpu : usr=91.38%, sys=7.98%, ctx=10, majf=0, minf=0 00:18:45.884 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.884 filename0: (groupid=0, jobs=1): err= 0: pid=74467: Sun Oct 13 11:22:27 2024 00:18:45.884 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(167MiB/5001msec) 00:18:45.884 slat (nsec): min=6944, max=56114, avg=16058.71, stdev=5664.94 00:18:45.884 clat (usec): min=10316, max=13400, avg=11226.19, stdev=481.17 00:18:45.884 lat (usec): min=10328, max=13425, avg=11242.25, stdev=481.41 00:18:45.884 clat percentiles (usec): 00:18:45.884 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 
20.00th=[10814], 00:18:45.884 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:18:45.884 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:18:45.884 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13435], 99.95th=[13435], 00:18:45.884 | 99.99th=[13435] 00:18:45.884 bw ( KiB/s): min=33024, max=34560, per=33.20%, avg=33962.67, stdev=512.00, samples=9 00:18:45.884 iops : min= 258, max= 270, avg=265.33, stdev= 4.00, samples=9 00:18:45.884 lat (msec) : 20=100.00% 00:18:45.884 cpu : usr=91.00%, sys=8.34%, ctx=7, majf=0, minf=9 00:18:45.884 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.884 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.884 00:18:45.884 Run status group 0 (all jobs): 00:18:45.884 READ: bw=99.9MiB/s (105MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-35.0MB/s), io=500MiB (524MB), run=5001-5004msec 00:18:45.884 11:22:27 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:45.884 11:22:27 -- target/dif.sh@43 -- # local sub 00:18:45.884 11:22:27 -- target/dif.sh@45 -- # for sub in "$@" 00:18:45.884 11:22:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:45.884 11:22:27 -- target/dif.sh@36 -- # local sub_id=0 00:18:45.884 11:22:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # bs=4k 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # numjobs=8 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # iodepth=16 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # runtime= 00:18:45.884 11:22:27 -- target/dif.sh@109 -- # files=2 00:18:45.884 11:22:27 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:45.884 11:22:27 -- target/dif.sh@28 -- # local sub 00:18:45.884 11:22:27 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.884 11:22:27 -- target/dif.sh@31 -- # create_subsystem 0 00:18:45.884 11:22:27 -- target/dif.sh@18 -- # local sub_id=0 00:18:45.884 11:22:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 bdev_null0 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 [2024-10-13 11:22:27.347766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.884 11:22:27 -- target/dif.sh@31 -- # create_subsystem 1 00:18:45.884 11:22:27 -- target/dif.sh@18 -- # local sub_id=1 00:18:45.884 11:22:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 bdev_null1 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.884 11:22:27 -- target/dif.sh@31 -- # create_subsystem 2 00:18:45.884 11:22:27 -- target/dif.sh@18 -- # local sub_id=2 00:18:45.884 11:22:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 bdev_null2 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:45.884 11:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.884 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 11:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.884 11:22:27 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:45.884 11:22:27 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:45.884 11:22:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:45.884 11:22:27 -- nvmf/common.sh@520 -- # config=() 00:18:45.884 11:22:27 -- nvmf/common.sh@520 -- # local subsystem config 00:18:45.884 11:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.884 { 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme$subsystem", 00:18:45.884 "trtype": "$TEST_TRANSPORT", 00:18:45.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "$NVMF_PORT", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.884 "hdgst": ${hdgst:-false}, 00:18:45.884 "ddgst": ${ddgst:-false} 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 } 00:18:45.884 EOF 00:18:45.884 )") 00:18:45.884 11:22:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:45.884 11:22:27 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:45.884 11:22:27 -- target/dif.sh@82 -- # gen_fio_conf 00:18:45.884 11:22:27 -- target/dif.sh@54 -- # local file 00:18:45.884 11:22:27 -- target/dif.sh@56 -- # cat 00:18:45.884 11:22:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # cat 00:18:45.884 11:22:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.884 11:22:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:45.884 11:22:27 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.884 11:22:27 -- common/autotest_common.sh@1320 -- # shift 00:18:45.884 11:22:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:45.884 11:22:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.884 11:22:27 -- target/dif.sh@73 -- # cat 00:18:45.884 11:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.884 { 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme$subsystem", 00:18:45.884 "trtype": "$TEST_TRANSPORT", 00:18:45.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "$NVMF_PORT", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.884 "hdgst": ${hdgst:-false}, 00:18:45.884 "ddgst": ${ddgst:-false} 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 } 00:18:45.884 EOF 00:18:45.884 )") 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # cat 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file++ )) 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.884 11:22:27 -- target/dif.sh@73 -- # cat 00:18:45.884 11:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.884 { 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme$subsystem", 00:18:45.884 "trtype": "$TEST_TRANSPORT", 00:18:45.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "$NVMF_PORT", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.884 "hdgst": ${hdgst:-false}, 00:18:45.884 "ddgst": ${ddgst:-false} 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 } 00:18:45.884 EOF 00:18:45.884 )") 00:18:45.884 11:22:27 -- nvmf/common.sh@542 -- # cat 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file++ )) 00:18:45.884 11:22:27 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.884 11:22:27 -- nvmf/common.sh@544 -- # jq . 00:18:45.884 11:22:27 -- nvmf/common.sh@545 -- # IFS=, 00:18:45.884 11:22:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme0", 00:18:45.884 "trtype": "tcp", 00:18:45.884 "traddr": "10.0.0.2", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "4420", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:45.884 "hdgst": false, 00:18:45.884 "ddgst": false 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 },{ 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme1", 00:18:45.884 "trtype": "tcp", 00:18:45.884 "traddr": "10.0.0.2", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "4420", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.884 "hdgst": false, 00:18:45.884 "ddgst": false 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 },{ 00:18:45.884 "params": { 00:18:45.884 "name": "Nvme2", 00:18:45.884 "trtype": "tcp", 00:18:45.884 "traddr": "10.0.0.2", 00:18:45.884 "adrfam": "ipv4", 00:18:45.884 "trsvcid": "4420", 00:18:45.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:45.884 "hdgst": false, 00:18:45.884 "ddgst": false 00:18:45.884 }, 00:18:45.884 "method": "bdev_nvme_attach_controller" 00:18:45.884 }' 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:45.884 11:22:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:45.884 11:22:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:45.884 11:22:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:46.143 11:22:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:46.143 11:22:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:46.143 11:22:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:46.143 11:22:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.143 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.143 ... 00:18:46.143 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.143 ... 00:18:46.143 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.143 ... 00:18:46.143 fio-3.35 00:18:46.143 Starting 24 threads 00:18:46.711 [2024-10-13 11:22:28.147603] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:46.711 [2024-10-13 11:22:28.147672] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:58.912 00:18:58.912 filename0: (groupid=0, jobs=1): err= 0: pid=74562: Sun Oct 13 11:22:38 2024 00:18:58.912 read: IOPS=225, BW=902KiB/s (924kB/s)(9024KiB/10006msec) 00:18:58.912 slat (usec): min=3, max=8025, avg=31.36, stdev=322.37 00:18:58.912 clat (msec): min=11, max=222, avg=70.81, stdev=25.15 00:18:58.912 lat (msec): min=11, max=222, avg=70.84, stdev=25.16 00:18:58.912 clat percentiles (msec): 00:18:58.912 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:18:58.912 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:18:58.912 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 105], 00:18:58.912 | 99.00th=[ 146], 99.50th=[ 222], 99.90th=[ 222], 99.95th=[ 222], 00:18:58.912 | 99.99th=[ 222] 00:18:58.912 bw ( KiB/s): min= 384, max= 1296, per=4.13%, avg=893.47, stdev=217.82, samples=19 00:18:58.912 iops : min= 96, max= 324, avg=223.37, stdev=54.46, samples=19 00:18:58.912 lat (msec) : 20=0.31%, 50=24.11%, 100=68.79%, 250=6.78% 00:18:58.912 cpu : usr=35.74%, sys=2.26%, ctx=1222, majf=0, minf=9 00:18:58.912 IO depths : 1=0.2%, 2=1.6%, 4=5.8%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:58.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.912 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.912 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.912 filename0: (groupid=0, jobs=1): err= 0: pid=74563: Sun Oct 13 11:22:38 2024 00:18:58.912 read: IOPS=223, BW=895KiB/s (916kB/s)(8976KiB/10029msec) 00:18:58.912 slat (usec): min=4, max=5025, avg=15.50, stdev=105.91 00:18:58.912 clat (msec): min=21, max=184, avg=71.36, stdev=21.17 00:18:58.912 lat (msec): min=21, max=184, avg=71.38, stdev=21.17 00:18:58.912 clat percentiles (msec): 00:18:58.912 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:18:58.912 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 74], 00:18:58.912 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 104], 00:18:58.912 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 184], 00:18:58.912 | 99.99th=[ 184] 00:18:58.912 bw ( KiB/s): min= 507, max= 1400, per=4.13%, avg=893.75, stdev=181.07, samples=20 00:18:58.912 iops : min= 126, max= 350, avg=223.40, stdev=45.35, samples=20 00:18:58.912 lat (msec) : 50=18.14%, 100=75.67%, 250=6.19% 00:18:58.912 cpu : usr=37.40%, sys=2.31%, ctx=1289, majf=0, minf=9 00:18:58.912 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:58.912 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.912 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.912 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.912 filename0: (groupid=0, jobs=1): err= 0: pid=74564: Sun Oct 13 11:22:38 2024 00:18:58.912 read: IOPS=223, BW=896KiB/s (917kB/s)(8972KiB/10015msec) 00:18:58.912 slat (usec): min=3, max=8025, avg=25.20, stdev=267.34 00:18:58.912 clat (msec): min=19, max=243, avg=71.32, stdev=25.23 00:18:58.912 lat (msec): min=19, max=243, avg=71.34, stdev=25.23 00:18:58.912 clat percentiles (msec): 00:18:58.912 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:18:58.912 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.912 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 100], 00:18:58.912 | 99.00th=[ 144], 99.50th=[ 245], 99.90th=[ 245], 99.95th=[ 245], 00:18:58.912 | 99.99th=[ 245] 00:18:58.912 bw ( KiB/s): min= 368, max= 1216, per=4.13%, avg=893.00, stdev=203.96, samples=20 00:18:58.912 iops : min= 92, max= 304, avg=223.25, stdev=50.99, samples=20 00:18:58.912 lat (msec) : 20=0.31%, 50=25.10%, 100=69.68%, 250=4.90% 00:18:58.913 cpu : usr=31.64%, sys=2.15%, ctx=869, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename0: (groupid=0, jobs=1): err= 0: pid=74565: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=226, BW=905KiB/s (927kB/s)(9100KiB/10053msec) 00:18:58.913 slat (usec): min=5, max=4025, avg=17.95, stdev=134.58 00:18:58.913 clat (usec): min=1938, max=139825, avg=70471.56, stdev=23592.20 00:18:58.913 lat (usec): min=1948, max=139839, avg=70489.51, stdev=23590.88 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 3], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 52], 00:18:58.913 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:58.913 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 106], 00:18:58.913 | 99.00th=[ 114], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:18:58.913 | 99.99th=[ 140] 00:18:58.913 bw ( KiB/s): min= 640, max= 1389, per=4.19%, avg=905.85, stdev=187.08, samples=20 00:18:58.913 iops : min= 160, max= 347, avg=226.45, stdev=46.74, samples=20 00:18:58.913 lat (msec) : 2=0.18%, 4=2.42%, 10=1.63%, 50=15.25%, 100=73.89% 00:18:58.913 lat (msec) : 250=6.64% 00:18:58.913 cpu : usr=34.58%, sys=1.91%, ctx=995, majf=0, minf=0 00:18:58.913 IO depths : 1=0.3%, 2=1.7%, 4=5.8%, 8=76.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=89.6%, 8=9.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename0: (groupid=0, jobs=1): err= 0: pid=74566: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=219, BW=879KiB/s (900kB/s)(8816KiB/10034msec) 00:18:58.913 slat (usec): min=6, max=8026, avg=17.48, stdev=170.73 00:18:58.913 clat (msec): min=12, max=167, avg=72.70, stdev=22.29 00:18:58.913 lat (msec): 
min=12, max=167, avg=72.72, stdev=22.29 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 51], 00:18:58.913 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.913 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 96], 95.00th=[ 108], 00:18:58.913 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 167], 00:18:58.913 | 99.99th=[ 167] 00:18:58.913 bw ( KiB/s): min= 528, max= 1328, per=4.05%, avg=875.20, stdev=180.64, samples=20 00:18:58.913 iops : min= 132, max= 332, avg=218.80, stdev=45.16, samples=20 00:18:58.913 lat (msec) : 20=0.05%, 50=19.87%, 100=71.82%, 250=8.26% 00:18:58.913 cpu : usr=31.35%, sys=1.84%, ctx=844, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=89.0%, 8=10.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename0: (groupid=0, jobs=1): err= 0: pid=74567: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=231, BW=926KiB/s (949kB/s)(9264KiB/10001msec) 00:18:58.913 slat (usec): min=4, max=8028, avg=33.50, stdev=371.68 00:18:58.913 clat (msec): min=4, max=210, avg=68.94, stdev=23.71 00:18:58.913 lat (msec): min=4, max=210, avg=68.97, stdev=23.71 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:18:58.913 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:18:58.913 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 96], 00:18:58.913 | 99.00th=[ 112], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 211], 00:18:58.913 | 99.99th=[ 211] 00:18:58.913 bw ( KiB/s): min= 496, max= 1328, per=4.21%, avg=910.74, stdev=201.23, samples=19 00:18:58.913 iops : min= 124, max= 332, avg=227.68, stdev=50.31, samples=19 00:18:58.913 lat (msec) : 10=0.95%, 50=25.52%, 100=69.99%, 250=3.54% 00:18:58.913 cpu : usr=36.87%, sys=2.29%, ctx=1079, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename0: (groupid=0, jobs=1): err= 0: pid=74568: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=234, BW=937KiB/s (960kB/s)(9376KiB/10005msec) 00:18:58.913 slat (usec): min=4, max=4033, avg=24.68, stdev=174.74 00:18:58.913 clat (msec): min=7, max=295, avg=68.17, stdev=24.00 00:18:58.913 lat (msec): min=7, max=295, avg=68.19, stdev=24.00 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:18:58.913 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:18:58.913 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 95], 95.00th=[ 100], 00:18:58.913 | 99.00th=[ 111], 99.50th=[ 224], 99.90th=[ 224], 99.95th=[ 296], 00:18:58.913 | 99.99th=[ 296] 00:18:58.913 bw ( KiB/s): min= 496, max= 1328, per=4.28%, avg=926.32, stdev=209.17, samples=19 00:18:58.913 iops : min= 124, max= 332, avg=231.58, stdev=52.29, samples=19 00:18:58.913 lat (msec) : 10=0.30%, 50=26.45%, 100=68.52%, 250=4.65%, 500=0.09% 00:18:58.913 cpu : 
usr=42.33%, sys=2.56%, ctx=1585, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=88.4%, 8=10.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename0: (groupid=0, jobs=1): err= 0: pid=74569: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=229, BW=918KiB/s (940kB/s)(9184KiB/10001msec) 00:18:58.913 slat (usec): min=4, max=6034, avg=26.23, stdev=223.53 00:18:58.913 clat (usec): min=1155, max=220973, avg=69550.12, stdev=24826.88 00:18:58.913 lat (usec): min=1163, max=220986, avg=69576.35, stdev=24822.51 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 3], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 49], 00:18:58.913 | 30.00th=[ 55], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:18:58.913 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 101], 00:18:58.913 | 99.00th=[ 131], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 222], 00:18:58.913 | 99.99th=[ 222] 00:18:58.913 bw ( KiB/s): min= 512, max= 1328, per=4.13%, avg=893.89, stdev=201.17, samples=19 00:18:58.913 iops : min= 128, max= 332, avg=223.47, stdev=50.29, samples=19 00:18:58.913 lat (msec) : 2=0.17%, 4=0.91%, 10=0.44%, 20=0.52%, 50=21.34% 00:18:58.913 lat (msec) : 100=71.17%, 250=5.44% 00:18:58.913 cpu : usr=41.89%, sys=2.76%, ctx=1433, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename1: (groupid=0, jobs=1): err= 0: pid=74570: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=225, BW=903KiB/s (925kB/s)(9060KiB/10034msec) 00:18:58.913 slat (usec): min=4, max=4038, avg=16.76, stdev=84.70 00:18:58.913 clat (msec): min=22, max=160, avg=70.72, stdev=19.65 00:18:58.913 lat (msec): min=22, max=160, avg=70.74, stdev=19.65 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 49], 00:18:58.913 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:18:58.913 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 102], 00:18:58.913 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 122], 99.95th=[ 161], 00:18:58.913 | 99.99th=[ 161] 00:18:58.913 bw ( KiB/s): min= 625, max= 1224, per=4.16%, avg=899.65, stdev=158.50, samples=20 00:18:58.913 iops : min= 156, max= 306, avg=224.90, stdev=39.65, samples=20 00:18:58.913 lat (msec) : 50=22.34%, 100=72.05%, 250=5.61% 00:18:58.913 cpu : usr=38.96%, sys=2.37%, ctx=1103, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=75.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename1: (groupid=0, jobs=1): err= 0: pid=74571: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=223, BW=895KiB/s (916kB/s)(8956KiB/10012msec) 
00:18:58.913 slat (usec): min=4, max=8021, avg=23.48, stdev=212.09 00:18:58.913 clat (msec): min=21, max=302, avg=71.40, stdev=24.48 00:18:58.913 lat (msec): min=21, max=302, avg=71.42, stdev=24.48 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 50], 00:18:58.913 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:18:58.913 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 101], 00:18:58.913 | 99.00th=[ 112], 99.50th=[ 236], 99.90th=[ 236], 99.95th=[ 305], 00:18:58.913 | 99.99th=[ 305] 00:18:58.913 bw ( KiB/s): min= 384, max= 1328, per=4.12%, avg=891.90, stdev=201.38, samples=20 00:18:58.913 iops : min= 96, max= 332, avg=222.95, stdev=50.36, samples=20 00:18:58.913 lat (msec) : 50=20.72%, 100=74.90%, 250=4.29%, 500=0.09% 00:18:58.913 cpu : usr=39.98%, sys=2.28%, ctx=1228, majf=0, minf=9 00:18:58.913 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=74.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:18:58.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 complete : 0=0.0%, 4=89.4%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.913 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.913 filename1: (groupid=0, jobs=1): err= 0: pid=74572: Sun Oct 13 11:22:38 2024 00:18:58.913 read: IOPS=211, BW=846KiB/s (867kB/s)(8492KiB/10034msec) 00:18:58.913 slat (usec): min=7, max=8023, avg=19.24, stdev=194.46 00:18:58.913 clat (msec): min=21, max=161, avg=75.46, stdev=22.44 00:18:58.913 lat (msec): min=21, max=161, avg=75.48, stdev=22.44 00:18:58.913 clat percentiles (msec): 00:18:58.913 | 1.00th=[ 27], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:18:58.913 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:18:58.913 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 113], 00:18:58.913 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:18:58.913 | 99.99th=[ 163] 00:18:58.913 bw ( KiB/s): min= 624, max= 1296, per=3.89%, avg=842.80, stdev=168.92, samples=20 00:18:58.913 iops : min= 156, max= 324, avg=210.70, stdev=42.23, samples=20 00:18:58.913 lat (msec) : 50=15.03%, 100=73.62%, 250=11.35% 00:18:58.914 cpu : usr=43.24%, sys=2.46%, ctx=1366, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=2.4%, 4=9.5%, 8=72.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=90.2%, 8=7.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename1: (groupid=0, jobs=1): err= 0: pid=74573: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=219, BW=879KiB/s (900kB/s)(8812KiB/10029msec) 00:18:58.914 slat (usec): min=3, max=8027, avg=29.53, stdev=341.10 00:18:58.914 clat (msec): min=22, max=180, avg=72.66, stdev=21.55 00:18:58.914 lat (msec): min=22, max=180, avg=72.69, stdev=21.55 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 57], 00:18:58.914 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:58.914 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 108], 00:18:58.914 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 180], 00:18:58.914 | 99.99th=[ 180] 00:18:58.914 bw ( KiB/s): min= 508, max= 1344, per=4.05%, avg=877.00, stdev=189.32, samples=20 00:18:58.914 iops : 
min= 127, max= 336, avg=219.25, stdev=47.33, samples=20 00:18:58.914 lat (msec) : 50=18.93%, 100=74.58%, 250=6.49% 00:18:58.914 cpu : usr=31.36%, sys=1.81%, ctx=845, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename1: (groupid=0, jobs=1): err= 0: pid=74574: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=228, BW=914KiB/s (936kB/s)(9144KiB/10007msec) 00:18:58.914 slat (usec): min=4, max=8036, avg=32.81, stdev=374.48 00:18:58.914 clat (msec): min=9, max=224, avg=69.87, stdev=22.08 00:18:58.914 lat (msec): min=9, max=224, avg=69.90, stdev=22.08 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 101], 00:18:58.914 | 99.00th=[ 120], 99.50th=[ 178], 99.90th=[ 178], 99.95th=[ 226], 00:18:58.914 | 99.99th=[ 226] 00:18:58.914 bw ( KiB/s): min= 512, max= 1296, per=4.21%, avg=910.80, stdev=185.56, samples=20 00:18:58.914 iops : min= 128, max= 324, avg=227.70, stdev=46.39, samples=20 00:18:58.914 lat (msec) : 10=0.26%, 50=24.89%, 100=69.73%, 250=5.12% 00:18:58.914 cpu : usr=35.12%, sys=2.20%, ctx=1012, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename1: (groupid=0, jobs=1): err= 0: pid=74575: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=215, BW=860KiB/s (881kB/s)(8632KiB/10034msec) 00:18:58.914 slat (usec): min=8, max=10021, avg=31.01, stdev=368.68 00:18:58.914 clat (msec): min=21, max=168, avg=74.22, stdev=21.39 00:18:58.914 lat (msec): min=21, max=168, avg=74.25, stdev=21.39 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 27], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 59], 00:18:58.914 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:18:58.914 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 108], 00:18:58.914 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 169], 00:18:58.914 | 99.99th=[ 169] 00:18:58.914 bw ( KiB/s): min= 624, max= 1200, per=3.96%, avg=856.80, stdev=146.43, samples=20 00:18:58.914 iops : min= 156, max= 300, avg=214.20, stdev=36.61, samples=20 00:18:58.914 lat (msec) : 50=15.11%, 100=76.55%, 250=8.34% 00:18:58.914 cpu : usr=37.89%, sys=2.43%, ctx=1370, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=76.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=89.5%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename1: (groupid=0, jobs=1): err= 0: pid=74576: Sun Oct 13 11:22:38 2024 00:18:58.914 
read: IOPS=230, BW=923KiB/s (945kB/s)(9232KiB/10002msec) 00:18:58.914 slat (usec): min=4, max=8023, avg=20.79, stdev=186.47 00:18:58.914 clat (msec): min=11, max=291, avg=69.24, stdev=24.41 00:18:58.914 lat (msec): min=11, max=291, avg=69.26, stdev=24.42 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 22], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 97], 00:18:58.914 | 99.00th=[ 112], 99.50th=[ 224], 99.90th=[ 224], 99.95th=[ 292], 00:18:58.914 | 99.99th=[ 292] 00:18:58.914 bw ( KiB/s): min= 496, max= 1328, per=4.23%, avg=915.05, stdev=199.47, samples=19 00:18:58.914 iops : min= 124, max= 332, avg=228.74, stdev=49.87, samples=19 00:18:58.914 lat (msec) : 20=0.74%, 50=26.08%, 100=69.54%, 250=3.55%, 500=0.09% 00:18:58.914 cpu : usr=35.36%, sys=1.85%, ctx=1022, majf=0, minf=10 00:18:58.914 IO depths : 1=0.1%, 2=1.3%, 4=5.5%, 8=77.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename1: (groupid=0, jobs=1): err= 0: pid=74577: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=231, BW=926KiB/s (948kB/s)(9280KiB/10022msec) 00:18:58.914 slat (usec): min=4, max=8023, avg=18.41, stdev=166.35 00:18:58.914 clat (msec): min=21, max=231, avg=69.01, stdev=22.01 00:18:58.914 lat (msec): min=21, max=231, avg=69.03, stdev=22.01 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 100], 00:18:58.914 | 99.00th=[ 124], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 232], 00:18:58.914 | 99.99th=[ 232] 00:18:58.914 bw ( KiB/s): min= 500, max= 1328, per=4.27%, avg=923.55, stdev=175.09, samples=20 00:18:58.914 iops : min= 125, max= 332, avg=230.85, stdev=43.83, samples=20 00:18:58.914 lat (msec) : 50=24.35%, 100=71.51%, 250=4.14% 00:18:58.914 cpu : usr=40.00%, sys=2.45%, ctx=1096, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename2: (groupid=0, jobs=1): err= 0: pid=74578: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=229, BW=919KiB/s (941kB/s)(9204KiB/10016msec) 00:18:58.914 slat (usec): min=3, max=8030, avg=32.59, stdev=373.16 00:18:58.914 clat (msec): min=21, max=229, avg=69.47, stdev=22.81 00:18:58.914 lat (msec): min=21, max=229, avg=69.50, stdev=22.81 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 97], 00:18:58.914 | 99.00th=[ 121], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 230], 00:18:58.914 | 99.99th=[ 230] 00:18:58.914 bw ( KiB/s): min= 495, max= 1296, 
per=4.24%, avg=916.15, stdev=191.88, samples=20 00:18:58.914 iops : min= 123, max= 324, avg=229.00, stdev=48.06, samples=20 00:18:58.914 lat (msec) : 50=29.03%, 100=67.49%, 250=3.48% 00:18:58.914 cpu : usr=31.27%, sys=1.89%, ctx=841, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename2: (groupid=0, jobs=1): err= 0: pid=74579: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=229, BW=917KiB/s (939kB/s)(9192KiB/10027msec) 00:18:58.914 slat (usec): min=4, max=8027, avg=27.55, stdev=301.13 00:18:58.914 clat (msec): min=22, max=243, avg=69.68, stdev=22.87 00:18:58.914 lat (msec): min=22, max=243, avg=69.71, stdev=22.86 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 102], 00:18:58.914 | 99.00th=[ 123], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 245], 00:18:58.914 | 99.99th=[ 245] 00:18:58.914 bw ( KiB/s): min= 383, max= 1344, per=4.22%, avg=912.75, stdev=203.81, samples=20 00:18:58.914 iops : min= 95, max= 336, avg=228.15, stdev=51.06, samples=20 00:18:58.914 lat (msec) : 50=23.93%, 100=70.89%, 250=5.18% 00:18:58.914 cpu : usr=38.35%, sys=2.55%, ctx=1117, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.914 filename2: (groupid=0, jobs=1): err= 0: pid=74580: Sun Oct 13 11:22:38 2024 00:18:58.914 read: IOPS=229, BW=917KiB/s (939kB/s)(9188KiB/10017msec) 00:18:58.914 slat (usec): min=4, max=4022, avg=16.60, stdev=83.76 00:18:58.914 clat (msec): min=22, max=230, avg=69.67, stdev=22.56 00:18:58.914 lat (msec): min=22, max=230, avg=69.69, stdev=22.57 00:18:58.914 clat percentiles (msec): 00:18:58.914 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 48], 00:18:58.914 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:18:58.914 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 104], 00:18:58.914 | 99.00th=[ 125], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 230], 00:18:58.914 | 99.99th=[ 230] 00:18:58.914 bw ( KiB/s): min= 493, max= 1352, per=4.22%, avg=912.05, stdev=199.02, samples=20 00:18:58.914 iops : min= 123, max= 338, avg=228.00, stdev=49.78, samples=20 00:18:58.914 lat (msec) : 50=24.03%, 100=70.48%, 250=5.49% 00:18:58.914 cpu : usr=36.84%, sys=2.18%, ctx=1217, majf=0, minf=9 00:18:58.914 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=77.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:58.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.914 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 filename2: (groupid=0, jobs=1): 
err= 0: pid=74581: Sun Oct 13 11:22:38 2024 00:18:58.915 read: IOPS=223, BW=893KiB/s (915kB/s)(8964KiB/10034msec) 00:18:58.915 slat (usec): min=6, max=5023, avg=25.85, stdev=216.80 00:18:58.915 clat (msec): min=21, max=183, avg=71.46, stdev=22.60 00:18:58.915 lat (msec): min=21, max=183, avg=71.49, stdev=22.60 00:18:58.915 clat percentiles (msec): 00:18:58.915 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:18:58.915 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:18:58.915 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 110], 00:18:58.915 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 184], 00:18:58.915 | 99.99th=[ 184] 00:18:58.915 bw ( KiB/s): min= 496, max= 1344, per=4.12%, avg=890.00, stdev=189.72, samples=20 00:18:58.915 iops : min= 124, max= 336, avg=222.50, stdev=47.43, samples=20 00:18:58.915 lat (msec) : 50=21.46%, 100=70.01%, 250=8.52% 00:18:58.915 cpu : usr=37.58%, sys=2.34%, ctx=1129, majf=0, minf=9 00:18:58.915 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:58.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 filename2: (groupid=0, jobs=1): err= 0: pid=74582: Sun Oct 13 11:22:38 2024 00:18:58.915 read: IOPS=232, BW=930KiB/s (953kB/s)(9348KiB/10049msec) 00:18:58.915 slat (usec): min=4, max=8035, avg=23.02, stdev=244.00 00:18:58.915 clat (usec): min=919, max=139350, avg=68625.30, stdev=23208.05 00:18:58.915 lat (usec): min=929, max=139365, avg=68648.31, stdev=23208.48 00:18:58.915 clat percentiles (msec): 00:18:58.915 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 48], 00:18:58.915 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:18:58.915 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 102], 00:18:58.915 | 99.00th=[ 117], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:18:58.915 | 99.99th=[ 140] 00:18:58.915 bw ( KiB/s): min= 640, max= 1296, per=4.29%, avg=928.40, stdev=166.33, samples=20 00:18:58.915 iops : min= 160, max= 324, avg=232.10, stdev=41.58, samples=20 00:18:58.915 lat (usec) : 1000=0.09% 00:18:58.915 lat (msec) : 4=2.57%, 10=0.77%, 50=20.62%, 100=70.95%, 250=5.01% 00:18:58.915 cpu : usr=38.80%, sys=2.43%, ctx=1262, majf=0, minf=9 00:18:58.915 IO depths : 1=0.2%, 2=1.3%, 4=4.7%, 8=77.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:58.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 filename2: (groupid=0, jobs=1): err= 0: pid=74583: Sun Oct 13 11:22:38 2024 00:18:58.915 read: IOPS=226, BW=905KiB/s (927kB/s)(9084KiB/10035msec) 00:18:58.915 slat (usec): min=7, max=8033, avg=36.40, stdev=381.59 00:18:58.915 clat (msec): min=21, max=161, avg=70.49, stdev=19.79 00:18:58.915 lat (msec): min=21, max=161, avg=70.52, stdev=19.79 00:18:58.915 clat percentiles (msec): 00:18:58.915 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 50], 00:18:58.915 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:18:58.915 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 102], 00:18:58.915 | 99.00th=[ 114], 99.50th=[ 117], 
99.90th=[ 121], 99.95th=[ 161], 00:18:58.915 | 99.99th=[ 161] 00:18:58.915 bw ( KiB/s): min= 624, max= 1360, per=4.17%, avg=902.00, stdev=169.58, samples=20 00:18:58.915 iops : min= 156, max= 340, avg=225.50, stdev=42.40, samples=20 00:18:58.915 lat (msec) : 50=21.97%, 100=72.88%, 250=5.15% 00:18:58.915 cpu : usr=37.03%, sys=2.11%, ctx=1103, majf=0, minf=9 00:18:58.915 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:58.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 filename2: (groupid=0, jobs=1): err= 0: pid=74584: Sun Oct 13 11:22:38 2024 00:18:58.915 read: IOPS=221, BW=888KiB/s (909kB/s)(8908KiB/10032msec) 00:18:58.915 slat (usec): min=4, max=8038, avg=36.34, stdev=397.86 00:18:58.915 clat (msec): min=20, max=207, avg=71.87, stdev=20.73 00:18:58.915 lat (msec): min=20, max=207, avg=71.91, stdev=20.74 00:18:58.915 clat percentiles (msec): 00:18:58.915 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 52], 00:18:58.915 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:18:58.915 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 101], 00:18:58.915 | 99.00th=[ 120], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 207], 00:18:58.915 | 99.99th=[ 207] 00:18:58.915 bw ( KiB/s): min= 496, max= 1384, per=4.09%, avg=884.40, stdev=187.70, samples=20 00:18:58.915 iops : min= 124, max= 346, avg=221.10, stdev=46.92, samples=20 00:18:58.915 lat (msec) : 50=19.22%, 100=75.93%, 250=4.85% 00:18:58.915 cpu : usr=33.94%, sys=2.10%, ctx=978, majf=0, minf=9 00:18:58.915 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:58.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 complete : 0=0.0%, 4=89.1%, 8=9.7%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 filename2: (groupid=0, jobs=1): err= 0: pid=74585: Sun Oct 13 11:22:38 2024 00:18:58.915 read: IOPS=228, BW=913KiB/s (935kB/s)(9148KiB/10019msec) 00:18:58.915 slat (usec): min=4, max=8051, avg=32.55, stdev=348.11 00:18:58.915 clat (msec): min=20, max=232, avg=69.92, stdev=22.55 00:18:58.915 lat (msec): min=20, max=232, avg=69.95, stdev=22.55 00:18:58.915 clat percentiles (msec): 00:18:58.915 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:18:58.915 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:18:58.915 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 101], 00:18:58.915 | 99.00th=[ 126], 99.50th=[ 188], 99.90th=[ 188], 99.95th=[ 234], 00:18:58.915 | 99.99th=[ 234] 00:18:58.915 bw ( KiB/s): min= 380, max= 1408, per=4.21%, avg=910.35, stdev=208.19, samples=20 00:18:58.915 iops : min= 95, max= 352, avg=227.55, stdev=52.08, samples=20 00:18:58.915 lat (msec) : 50=24.01%, 100=70.84%, 250=5.16% 00:18:58.915 cpu : usr=37.22%, sys=2.45%, ctx=1119, majf=0, minf=9 00:18:58.915 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:58.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.915 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:58.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:58.915 00:18:58.915 Run status group 0 (all jobs): 00:18:58.915 READ: bw=21.1MiB/s (22.1MB/s), 846KiB/s-937KiB/s (867kB/s-960kB/s), io=212MiB (223MB), run=10001-10053msec 00:18:58.915 11:22:38 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:58.915 11:22:38 -- target/dif.sh@43 -- # local sub 00:18:58.915 11:22:38 -- target/dif.sh@45 -- # for sub in "$@" 00:18:58.915 11:22:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:58.915 11:22:38 -- target/dif.sh@36 -- # local sub_id=0 00:18:58.915 11:22:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@45 -- # for sub in "$@" 00:18:58.915 11:22:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:58.915 11:22:38 -- target/dif.sh@36 -- # local sub_id=1 00:18:58.915 11:22:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@45 -- # for sub in "$@" 00:18:58.915 11:22:38 -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:58.915 11:22:38 -- target/dif.sh@36 -- # local sub_id=2 00:18:58.915 11:22:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # NULL_DIF=1 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # numjobs=2 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # iodepth=8 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # runtime=5 00:18:58.915 11:22:38 -- target/dif.sh@115 -- # files=1 00:18:58.915 11:22:38 -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:58.915 11:22:38 -- target/dif.sh@28 -- # local sub 00:18:58.915 11:22:38 -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.915 11:22:38 -- target/dif.sh@31 -- # create_subsystem 0 00:18:58.915 11:22:38 -- target/dif.sh@18 -- # local sub_id=0 00:18:58.915 11:22:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 bdev_null0 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:58.915 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.915 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.915 11:22:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:58.916 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.916 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.916 [2024-10-13 11:22:38.605295] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.916 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.916 11:22:38 -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.916 11:22:38 -- target/dif.sh@31 -- # create_subsystem 1 00:18:58.916 11:22:38 -- target/dif.sh@18 -- # local sub_id=1 00:18:58.916 11:22:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:58.916 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.916 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.916 bdev_null1 00:18:58.916 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.916 11:22:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:58.916 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.916 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.916 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.916 11:22:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:58.916 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.916 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.916 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.916 11:22:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.916 11:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.916 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:18:58.916 11:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.916 11:22:38 -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:58.916 11:22:38 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:58.916 11:22:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:58.916 11:22:38 -- nvmf/common.sh@520 -- # config=() 00:18:58.916 11:22:38 -- nvmf/common.sh@520 -- # local subsystem config 00:18:58.916 11:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:58.916 11:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:58.916 { 00:18:58.916 
"params": { 00:18:58.916 "name": "Nvme$subsystem", 00:18:58.916 "trtype": "$TEST_TRANSPORT", 00:18:58.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.916 "adrfam": "ipv4", 00:18:58.916 "trsvcid": "$NVMF_PORT", 00:18:58.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.916 "hdgst": ${hdgst:-false}, 00:18:58.916 "ddgst": ${ddgst:-false} 00:18:58.916 }, 00:18:58.916 "method": "bdev_nvme_attach_controller" 00:18:58.916 } 00:18:58.916 EOF 00:18:58.916 )") 00:18:58.916 11:22:38 -- target/dif.sh@82 -- # gen_fio_conf 00:18:58.916 11:22:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.916 11:22:38 -- target/dif.sh@54 -- # local file 00:18:58.916 11:22:38 -- target/dif.sh@56 -- # cat 00:18:58.916 11:22:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.916 11:22:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:58.916 11:22:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.916 11:22:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:58.916 11:22:38 -- nvmf/common.sh@542 -- # cat 00:18:58.916 11:22:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.916 11:22:38 -- common/autotest_common.sh@1320 -- # shift 00:18:58.916 11:22:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:58.916 11:22:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:58.916 11:22:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:58.916 11:22:38 -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.916 11:22:38 -- target/dif.sh@73 -- # cat 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:58.916 11:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:58.916 11:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:58.916 { 00:18:58.916 "params": { 00:18:58.916 "name": "Nvme$subsystem", 00:18:58.916 "trtype": "$TEST_TRANSPORT", 00:18:58.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.916 "adrfam": "ipv4", 00:18:58.916 "trsvcid": "$NVMF_PORT", 00:18:58.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.916 "hdgst": ${hdgst:-false}, 00:18:58.916 "ddgst": ${ddgst:-false} 00:18:58.916 }, 00:18:58.916 "method": "bdev_nvme_attach_controller" 00:18:58.916 } 00:18:58.916 EOF 00:18:58.916 )") 00:18:58.916 11:22:38 -- nvmf/common.sh@542 -- # cat 00:18:58.916 11:22:38 -- target/dif.sh@72 -- # (( file++ )) 00:18:58.916 11:22:38 -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.916 11:22:38 -- nvmf/common.sh@544 -- # jq . 
00:18:58.916 11:22:38 -- nvmf/common.sh@545 -- # IFS=, 00:18:58.916 11:22:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:58.916 "params": { 00:18:58.916 "name": "Nvme0", 00:18:58.916 "trtype": "tcp", 00:18:58.916 "traddr": "10.0.0.2", 00:18:58.916 "adrfam": "ipv4", 00:18:58.916 "trsvcid": "4420", 00:18:58.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:58.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:58.916 "hdgst": false, 00:18:58.916 "ddgst": false 00:18:58.916 }, 00:18:58.916 "method": "bdev_nvme_attach_controller" 00:18:58.916 },{ 00:18:58.916 "params": { 00:18:58.916 "name": "Nvme1", 00:18:58.916 "trtype": "tcp", 00:18:58.916 "traddr": "10.0.0.2", 00:18:58.916 "adrfam": "ipv4", 00:18:58.916 "trsvcid": "4420", 00:18:58.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.916 "hdgst": false, 00:18:58.916 "ddgst": false 00:18:58.916 }, 00:18:58.916 "method": "bdev_nvme_attach_controller" 00:18:58.916 }' 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:58.916 11:22:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:58.916 11:22:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:58.916 11:22:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:58.916 11:22:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:58.916 11:22:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:58.916 11:22:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.916 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:58.916 ... 00:18:58.916 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:58.916 ... 00:18:58.916 fio-3.35 00:18:58.916 Starting 4 threads 00:18:58.916 [2024-10-13 11:22:39.228061] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
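The JSON printed above is what gen_nvmf_target_json hands to fio's spdk_bdev ioengine over /dev/fd/62: one bdev_nvme_attach_controller parameter block per subsystem, comma-joined and pretty-printed through jq. A minimal standalone sketch of the same shape follows; the helper name, the fixed 10.0.0.2:4420 endpoint, and the bdev-subsystem envelope around the attach blocks are assumptions for illustration (the exact envelope used by nvmf/common.sh is not visible in this excerpt).

# Sketch only: rebuild the kind of JSON config fed to fio's spdk_bdev ioengine.
# gen_target_json is a hypothetical helper name, not part of the test scripts.
gen_target_json() {
    local sub config=()
    for sub in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the per-subsystem blocks, wrap them in a standard SPDK
    # bdev-subsystem config (assumed envelope), and pretty-print with jq.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}
gen_target_json 0 1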
00:18:58.916 [2024-10-13 11:22:39.228127] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:03.104 00:19:03.104 filename0: (groupid=0, jobs=1): err= 0: pid=74736: Sun Oct 13 11:22:44 2024 00:19:03.104 read: IOPS=2262, BW=17.7MiB/s (18.5MB/s)(88.4MiB/5002msec) 00:19:03.104 slat (nsec): min=3372, max=65299, avg=15561.25, stdev=4940.00 00:19:03.104 clat (usec): min=1458, max=6648, avg=3499.74, stdev=1069.16 00:19:03.104 lat (usec): min=1471, max=6662, avg=3515.30, stdev=1068.63 00:19:03.104 clat percentiles (usec): 00:19:03.104 | 1.00th=[ 1909], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:19:03.104 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2966], 60.00th=[ 4424], 00:19:03.104 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:03.104 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5276], 00:19:03.104 | 99.99th=[ 5342] 00:19:03.104 bw ( KiB/s): min=17010, max=18368, per=26.59%, avg=18090.89, stdev=424.27, samples=9 00:19:03.104 iops : min= 2126, max= 2296, avg=2261.33, stdev=53.11, samples=9 00:19:03.104 lat (msec) : 2=2.68%, 4=50.80%, 10=46.52% 00:19:03.104 cpu : usr=91.72%, sys=7.26%, ctx=8, majf=0, minf=9 00:19:03.104 IO depths : 1=0.1%, 2=0.8%, 4=63.2%, 8=36.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 issued rwts: total=11315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.104 filename0: (groupid=0, jobs=1): err= 0: pid=74737: Sun Oct 13 11:22:44 2024 00:19:03.104 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5003msec) 00:19:03.104 slat (nsec): min=6697, max=77380, avg=11710.50, stdev=5006.38 00:19:03.104 clat (usec): min=1178, max=6272, avg=4236.25, stdev=967.15 00:19:03.104 lat (usec): min=1185, max=6305, avg=4247.96, stdev=967.18 00:19:03.104 clat percentiles (usec): 00:19:03.104 | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2474], 20.00th=[ 3032], 00:19:03.104 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:19:03.104 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5014], 00:19:03.104 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5997], 99.95th=[ 6194], 00:19:03.104 | 99.99th=[ 6259] 00:19:03.104 bw ( KiB/s): min=13056, max=19280, per=22.28%, avg=15157.33, stdev=2612.74, samples=9 00:19:03.104 iops : min= 1632, max= 2410, avg=1894.67, stdev=326.59, samples=9 00:19:03.104 lat (msec) : 2=3.74%, 4=19.54%, 10=76.72% 00:19:03.104 cpu : usr=91.82%, sys=7.30%, ctx=10, majf=0, minf=9 00:19:03.104 IO depths : 1=0.1%, 2=14.9%, 4=55.5%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 issued rwts: total=9358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.104 filename1: (groupid=0, jobs=1): err= 0: pid=74738: Sun Oct 13 11:22:44 2024 00:19:03.104 read: IOPS=2083, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:19:03.104 slat (nsec): min=3248, max=57380, avg=14403.79, stdev=5348.23 00:19:03.104 clat (usec): min=1667, max=6204, avg=3801.07, stdev=1084.53 00:19:03.104 lat (usec): min=1677, max=6230, avg=3815.47, stdev=1082.81 00:19:03.104 clat percentiles (usec): 00:19:03.104 | 1.00th=[ 2180], 
5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:19:03.104 | 30.00th=[ 2606], 40.00th=[ 3130], 50.00th=[ 4490], 60.00th=[ 4555], 00:19:03.104 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5014], 00:19:03.104 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5997], 99.95th=[ 6063], 00:19:03.104 | 99.99th=[ 6128] 00:19:03.104 bw ( KiB/s): min=13072, max=18368, per=24.25%, avg=16502.56, stdev=2338.35, samples=9 00:19:03.104 iops : min= 1634, max= 2296, avg=2062.78, stdev=292.34, samples=9 00:19:03.104 lat (msec) : 2=0.21%, 4=40.42%, 10=59.36% 00:19:03.104 cpu : usr=92.02%, sys=7.04%, ctx=8, majf=0, minf=0 00:19:03.104 IO depths : 1=0.1%, 2=6.6%, 4=60.1%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 issued rwts: total=10422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.104 filename1: (groupid=0, jobs=1): err= 0: pid=74739: Sun Oct 13 11:22:44 2024 00:19:03.104 read: IOPS=2291, BW=17.9MiB/s (18.8MB/s)(89.6MiB/5004msec) 00:19:03.104 slat (nsec): min=6900, max=63421, avg=13400.03, stdev=5255.85 00:19:03.104 clat (usec): min=1264, max=5362, avg=3460.19, stdev=1080.34 00:19:03.104 lat (usec): min=1272, max=5377, avg=3473.59, stdev=1080.10 00:19:03.104 clat percentiles (usec): 00:19:03.104 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2442], 00:19:03.104 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2868], 60.00th=[ 4359], 00:19:03.104 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:03.104 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5211], 00:19:03.104 | 99.99th=[ 5342] 00:19:03.104 bw ( KiB/s): min=18016, max=19511, per=26.96%, avg=18345.90, stdev=429.07, samples=10 00:19:03.104 iops : min= 2252, max= 2438, avg=2293.00, stdev=53.39, samples=10 00:19:03.104 lat (msec) : 2=3.16%, 4=52.64%, 10=44.20% 00:19:03.104 cpu : usr=92.32%, sys=6.68%, ctx=8, majf=0, minf=0 00:19:03.104 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.104 issued rwts: total=11465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.104 00:19:03.104 Run status group 0 (all jobs): 00:19:03.104 READ: bw=66.4MiB/s (69.7MB/s), 14.6MiB/s-17.9MiB/s (15.3MB/s-18.8MB/s), io=333MiB (349MB), run=5002-5004msec 00:19:03.105 11:22:44 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:03.105 11:22:44 -- target/dif.sh@43 -- # local sub 00:19:03.105 11:22:44 -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.105 11:22:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:03.105 11:22:44 -- target/dif.sh@36 -- # local sub_id=0 00:19:03.105 11:22:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 
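Everything this run touched on the target side was created over RPC at the top of the test: a 64 MB null bdev with 16 bytes of metadata and DIF type 1, a subsystem, a namespace, and a TCP listener on 10.0.0.2:4420. Outside the harness the same target can be stood up by hand with scripts/rpc.py; a sketch, assuming a running nvmf_tgt on the default RPC socket and the addresses used in this job:

# Sketch: reproduce the DIF test target by hand against a running nvmf_tgt.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420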
00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.105 11:22:44 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:03.105 11:22:44 -- target/dif.sh@36 -- # local sub_id=1 00:19:03.105 11:22:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 ************************************ 00:19:03.105 END TEST fio_dif_rand_params 00:19:03.105 ************************************ 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 00:19:03.105 real 0m23.136s 00:19:03.105 user 2m2.870s 00:19:03.105 sys 0m8.772s 00:19:03.105 11:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 11:22:44 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:03.105 11:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:03.105 11:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 ************************************ 00:19:03.105 START TEST fio_dif_digest 00:19:03.105 ************************************ 00:19:03.105 11:22:44 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:19:03.105 11:22:44 -- target/dif.sh@123 -- # local NULL_DIF 00:19:03.105 11:22:44 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:03.105 11:22:44 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:03.105 11:22:44 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:03.105 11:22:44 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:03.105 11:22:44 -- target/dif.sh@127 -- # numjobs=3 00:19:03.105 11:22:44 -- target/dif.sh@127 -- # iodepth=3 00:19:03.105 11:22:44 -- target/dif.sh@127 -- # runtime=10 00:19:03.105 11:22:44 -- target/dif.sh@128 -- # hdgst=true 00:19:03.105 11:22:44 -- target/dif.sh@128 -- # ddgst=true 00:19:03.105 11:22:44 -- target/dif.sh@130 -- # create_subsystems 0 00:19:03.105 11:22:44 -- target/dif.sh@28 -- # local sub 00:19:03.105 11:22:44 -- target/dif.sh@30 -- # for sub in "$@" 00:19:03.105 11:22:44 -- target/dif.sh@31 -- # create_subsystem 0 00:19:03.105 11:22:44 -- target/dif.sh@18 -- # local sub_id=0 00:19:03.105 11:22:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 bdev_null0 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:03.105 11:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.105 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:03.105 [2024-10-13 11:22:44.650793] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.105 11:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.105 11:22:44 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:03.105 11:22:44 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:03.105 11:22:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:03.105 11:22:44 -- nvmf/common.sh@520 -- # config=() 00:19:03.105 11:22:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.105 11:22:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:03.105 11:22:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:03.105 11:22:44 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.105 11:22:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:03.105 { 00:19:03.105 "params": { 00:19:03.105 "name": "Nvme$subsystem", 00:19:03.105 "trtype": "$TEST_TRANSPORT", 00:19:03.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.105 "adrfam": "ipv4", 00:19:03.105 "trsvcid": "$NVMF_PORT", 00:19:03.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.105 "hdgst": ${hdgst:-false}, 00:19:03.105 "ddgst": ${ddgst:-false} 00:19:03.105 }, 00:19:03.105 "method": "bdev_nvme_attach_controller" 00:19:03.105 } 00:19:03.105 EOF 00:19:03.105 )") 00:19:03.105 11:22:44 -- target/dif.sh@82 -- # gen_fio_conf 00:19:03.105 11:22:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:03.105 11:22:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.105 11:22:44 -- target/dif.sh@54 -- # local file 00:19:03.105 11:22:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:03.105 11:22:44 -- target/dif.sh@56 -- # cat 00:19:03.105 11:22:44 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.105 11:22:44 -- common/autotest_common.sh@1320 -- # shift 00:19:03.105 11:22:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:03.105 11:22:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.105 11:22:44 -- nvmf/common.sh@542 -- # cat 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:03.105 11:22:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:03.105 11:22:44 -- target/dif.sh@72 -- # (( file <= files )) 00:19:03.105 11:22:44 -- nvmf/common.sh@544 -- # jq . 
00:19:03.105 11:22:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:03.105 11:22:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:03.105 "params": { 00:19:03.105 "name": "Nvme0", 00:19:03.105 "trtype": "tcp", 00:19:03.105 "traddr": "10.0.0.2", 00:19:03.105 "adrfam": "ipv4", 00:19:03.105 "trsvcid": "4420", 00:19:03.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:03.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:03.105 "hdgst": true, 00:19:03.105 "ddgst": true 00:19:03.105 }, 00:19:03.105 "method": "bdev_nvme_attach_controller" 00:19:03.105 }' 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:03.105 11:22:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:03.105 11:22:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:03.105 11:22:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:03.364 11:22:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:03.364 11:22:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:03.364 11:22:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.364 11:22:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.364 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:03.364 ... 00:19:03.364 fio-3.35 00:19:03.364 Starting 3 threads 00:19:03.622 [2024-10-13 11:22:45.191286] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
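For the digest pass, the attach parameters above flip hdgst and ddgst to true, and fio is launched with the SPDK bdev plugin preloaded and the target config passed in on an anonymous descriptor. A standalone sketch of the same invocation, with hypothetical target.json and dif-digest.fio file names standing in for the /dev/fd/62 and /dev/fd/61 descriptors the harness generates on the fly (the bdev-subsystem envelope is assumed, as in the earlier sketch):

# Sketch: fio through the SPDK bdev ioengine with NVMe/TCP header and data digests on.
cat > target.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true, "ddgst": true } } ] } ] }
EOF

# Same launch pattern as the harness: preload the fio plugin, point it at the config.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf target.json dif-digest.fio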
00:19:03.622 [2024-10-13 11:22:45.191580] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:15.868 00:19:15.868 filename0: (groupid=0, jobs=1): err= 0: pid=74845: Sun Oct 13 11:22:55 2024 00:19:15.868 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10004msec) 00:19:15.868 slat (nsec): min=6890, max=52427, avg=15269.16, stdev=5936.14 00:19:15.868 clat (usec): min=11744, max=16357, avg=12808.97, stdev=535.82 00:19:15.868 lat (usec): min=11758, max=16385, avg=12824.24, stdev=535.92 00:19:15.868 clat percentiles (usec): 00:19:15.868 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:19:15.868 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:15.868 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:15.868 | 99.00th=[13960], 99.50th=[14091], 99.90th=[16319], 99.95th=[16319], 00:19:15.868 | 99.99th=[16319] 00:19:15.868 bw ( KiB/s): min=29184, max=31488, per=33.30%, avg=29871.16, stdev=566.38, samples=19 00:19:15.868 iops : min= 228, max= 246, avg=233.37, stdev= 4.42, samples=19 00:19:15.868 lat (msec) : 20=100.00% 00:19:15.868 cpu : usr=91.76%, sys=7.66%, ctx=15, majf=0, minf=9 00:19:15.868 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.868 filename0: (groupid=0, jobs=1): err= 0: pid=74846: Sun Oct 13 11:22:55 2024 00:19:15.868 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10002msec) 00:19:15.868 slat (nsec): min=7271, max=62656, avg=16352.09, stdev=5766.58 00:19:15.868 clat (usec): min=11761, max=14536, avg=12801.78, stdev=521.47 00:19:15.868 lat (usec): min=11774, max=14555, avg=12818.13, stdev=521.81 00:19:15.868 clat percentiles (usec): 00:19:15.868 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:19:15.868 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:15.868 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:15.868 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14484], 99.95th=[14484], 00:19:15.868 | 99.99th=[14484] 00:19:15.868 bw ( KiB/s): min=29184, max=30720, per=33.34%, avg=29911.58, stdev=541.84, samples=19 00:19:15.868 iops : min= 228, max= 240, avg=233.63, stdev= 4.23, samples=19 00:19:15.868 lat (msec) : 20=100.00% 00:19:15.868 cpu : usr=91.48%, sys=7.90%, ctx=11, majf=0, minf=9 00:19:15.868 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.868 filename0: (groupid=0, jobs=1): err= 0: pid=74847: Sun Oct 13 11:22:55 2024 00:19:15.868 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10003msec) 00:19:15.868 slat (usec): min=7, max=252, avg=16.42, stdev= 7.58 00:19:15.868 clat (usec): min=11749, max=14876, avg=12803.67, stdev=526.31 00:19:15.868 lat (usec): min=11762, max=14901, avg=12820.08, stdev=526.65 00:19:15.868 clat percentiles (usec): 00:19:15.868 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 
20.00th=[12387], 00:19:15.868 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12911], 00:19:15.868 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:15.868 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14877], 99.95th=[14877], 00:19:15.868 | 99.99th=[14877] 00:19:15.868 bw ( KiB/s): min=29184, max=30720, per=33.34%, avg=29908.42, stdev=541.39, samples=19 00:19:15.868 iops : min= 228, max= 240, avg=233.63, stdev= 4.23, samples=19 00:19:15.868 lat (msec) : 20=100.00% 00:19:15.868 cpu : usr=90.17%, sys=8.94%, ctx=103, majf=0, minf=9 00:19:15.868 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.868 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.868 00:19:15.868 Run status group 0 (all jobs): 00:19:15.868 READ: bw=87.6MiB/s (91.9MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=876MiB (919MB), run=10002-10004msec 00:19:15.868 11:22:55 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:15.868 11:22:55 -- target/dif.sh@43 -- # local sub 00:19:15.868 11:22:55 -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.868 11:22:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:15.868 11:22:55 -- target/dif.sh@36 -- # local sub_id=0 00:19:15.868 11:22:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:15.868 11:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.868 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:19:15.868 11:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.868 11:22:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:15.868 11:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.868 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:19:15.868 ************************************ 00:19:15.868 END TEST fio_dif_digest 00:19:15.868 ************************************ 00:19:15.868 11:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.868 00:19:15.868 real 0m10.866s 00:19:15.868 user 0m27.891s 00:19:15.868 sys 0m2.690s 00:19:15.868 11:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:15.868 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:19:15.868 11:22:55 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:15.868 11:22:55 -- target/dif.sh@147 -- # nvmftestfini 00:19:15.868 11:22:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:15.868 11:22:55 -- nvmf/common.sh@116 -- # sync 00:19:15.868 11:22:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:15.868 11:22:55 -- nvmf/common.sh@119 -- # set +e 00:19:15.868 11:22:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:15.868 11:22:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:15.868 rmmod nvme_tcp 00:19:15.868 rmmod nvme_fabrics 00:19:15.868 rmmod nvme_keyring 00:19:15.868 11:22:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:15.868 11:22:55 -- nvmf/common.sh@123 -- # set -e 00:19:15.868 11:22:55 -- nvmf/common.sh@124 -- # return 0 00:19:15.868 11:22:55 -- nvmf/common.sh@477 -- # '[' -n 74082 ']' 00:19:15.868 11:22:55 -- nvmf/common.sh@478 -- # killprocess 74082 00:19:15.868 11:22:55 -- common/autotest_common.sh@926 -- # '[' -z 74082 ']' 00:19:15.868 11:22:55 -- common/autotest_common.sh@930 -- # kill -0 74082 
00:19:15.868 11:22:55 -- common/autotest_common.sh@931 -- # uname 00:19:15.868 11:22:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.868 11:22:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74082 00:19:15.868 killing process with pid 74082 00:19:15.868 11:22:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:15.868 11:22:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:15.868 11:22:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74082' 00:19:15.868 11:22:55 -- common/autotest_common.sh@945 -- # kill 74082 00:19:15.868 11:22:55 -- common/autotest_common.sh@950 -- # wait 74082 00:19:15.868 11:22:55 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:15.868 11:22:55 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:15.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:15.868 Waiting for block devices as requested 00:19:15.868 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:15.868 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:15.868 11:22:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:15.868 11:22:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:15.868 11:22:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.868 11:22:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:15.868 11:22:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.868 11:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:15.868 11:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.868 11:22:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:15.868 00:19:15.868 real 0m59.051s 00:19:15.868 user 3m46.371s 00:19:15.868 sys 0m19.711s 00:19:15.868 11:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:15.868 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.868 ************************************ 00:19:15.868 END TEST nvmf_dif 00:19:15.868 ************************************ 00:19:15.868 11:22:56 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:15.868 11:22:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:15.868 11:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:15.868 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.868 ************************************ 00:19:15.868 START TEST nvmf_abort_qd_sizes 00:19:15.868 ************************************ 00:19:15.869 11:22:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:15.869 * Looking for test storage... 
00:19:15.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:15.869 11:22:56 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.869 11:22:56 -- nvmf/common.sh@7 -- # uname -s 00:19:15.869 11:22:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.869 11:22:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.869 11:22:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.869 11:22:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.869 11:22:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.869 11:22:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.869 11:22:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.869 11:22:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.869 11:22:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.869 11:22:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:19:15.869 11:22:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=fbe6aeb5-20f4-45b3-886a-eb976206cb47 00:19:15.869 11:22:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.869 11:22:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.869 11:22:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.869 11:22:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.869 11:22:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.869 11:22:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.869 11:22:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.869 11:22:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.869 11:22:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.869 11:22:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.869 11:22:56 -- paths/export.sh@5 -- # export PATH 00:19:15.869 11:22:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.869 11:22:56 -- nvmf/common.sh@46 -- # : 0 00:19:15.869 11:22:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:15.869 11:22:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:15.869 11:22:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:15.869 11:22:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.869 11:22:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.869 11:22:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:15.869 11:22:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:15.869 11:22:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:15.869 11:22:56 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:15.869 11:22:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:15.869 11:22:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.869 11:22:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:15.869 11:22:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:15.869 11:22:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:15.869 11:22:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.869 11:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:15.869 11:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.869 11:22:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:15.869 11:22:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:15.869 11:22:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.869 11:22:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.869 11:22:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:15.869 11:22:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:15.869 11:22:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.869 11:22:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.869 11:22:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.869 11:22:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.869 11:22:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.869 11:22:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.869 11:22:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.869 11:22:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.869 11:22:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:15.869 11:22:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:15.869 Cannot find device "nvmf_tgt_br" 00:19:15.869 11:22:56 -- nvmf/common.sh@154 -- # true 00:19:15.869 11:22:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.869 Cannot find device "nvmf_tgt_br2" 00:19:15.869 11:22:56 -- nvmf/common.sh@155 -- # true 
00:19:15.869 11:22:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:15.869 11:22:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:15.869 Cannot find device "nvmf_tgt_br" 00:19:15.869 11:22:56 -- nvmf/common.sh@157 -- # true 00:19:15.869 11:22:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:15.869 Cannot find device "nvmf_tgt_br2" 00:19:15.869 11:22:56 -- nvmf/common.sh@158 -- # true 00:19:15.869 11:22:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:15.869 11:22:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:15.869 11:22:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.869 11:22:56 -- nvmf/common.sh@161 -- # true 00:19:15.869 11:22:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.869 11:22:56 -- nvmf/common.sh@162 -- # true 00:19:15.869 11:22:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.869 11:22:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.869 11:22:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.869 11:22:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.869 11:22:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.869 11:22:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.869 11:22:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.869 11:22:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:15.869 11:22:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:15.869 11:22:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:15.869 11:22:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:15.869 11:22:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:15.869 11:22:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:15.869 11:22:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.869 11:22:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.869 11:22:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.869 11:22:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:15.869 11:22:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:15.869 11:22:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.869 11:22:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.869 11:22:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.869 11:22:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.869 11:22:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.869 11:22:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:15.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:15.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:15.869 00:19:15.869 --- 10.0.0.2 ping statistics --- 00:19:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.869 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:15.869 11:22:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:15.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:19:15.869 00:19:15.869 --- 10.0.0.3 ping statistics --- 00:19:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.869 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:15.869 11:22:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:15.869 00:19:15.869 --- 10.0.0.1 ping statistics --- 00:19:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.869 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:15.869 11:22:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.869 11:22:56 -- nvmf/common.sh@421 -- # return 0 00:19:15.869 11:22:56 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:15.869 11:22:56 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:16.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:16.129 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:16.129 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:16.129 11:22:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.129 11:22:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:16.129 11:22:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:16.129 11:22:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.129 11:22:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:16.129 11:22:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:16.388 11:22:57 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:16.388 11:22:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:16.388 11:22:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:16.388 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:19:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.388 11:22:57 -- nvmf/common.sh@469 -- # nvmfpid=75447 00:19:16.388 11:22:57 -- nvmf/common.sh@470 -- # waitforlisten 75447 00:19:16.388 11:22:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:16.388 11:22:57 -- common/autotest_common.sh@819 -- # '[' -z 75447 ']' 00:19:16.388 11:22:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.388 11:22:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.388 11:22:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.388 11:22:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.388 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:19:16.388 [2024-10-13 11:22:57.804945] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
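Those three pings verify the topology that nvmf_veth_init builds before the target app starts: veth pairs whose host-side ends sit on a bridge, 10.0.0.1 on the initiator interface, and 10.0.0.2/10.0.0.3 on interfaces moved into the nvmf_tgt_ns_spdk namespace, with TCP port 4420 allowed through iptables. Condensed to the essential commands for a single target interface (a sketch of the same setup, run as root; the second target interface and address are omitted for brevity):

# Sketch: the veth/netns topology exercised by the pings above (single target interface).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT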
00:19:16.388 [2024-10-13 11:22:57.805041] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.388 [2024-10-13 11:22:57.947101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.647 [2024-10-13 11:22:58.017446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:16.647 [2024-10-13 11:22:58.017621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.647 [2024-10-13 11:22:58.017637] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.647 [2024-10-13 11:22:58.017648] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.647 [2024-10-13 11:22:58.017812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.647 [2024-10-13 11:22:58.018128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.647 [2024-10-13 11:22:58.018443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.647 [2024-10-13 11:22:58.018448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.582 11:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.582 11:22:58 -- common/autotest_common.sh@852 -- # return 0 00:19:17.582 11:22:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:17.582 11:22:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:17.582 11:22:58 -- common/autotest_common.sh@10 -- # set +x 00:19:17.582 11:22:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.582 11:22:58 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:17.582 11:22:58 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:17.583 11:22:58 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:17.583 11:22:58 -- scripts/common.sh@312 -- # local nvmes 00:19:17.583 11:22:58 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:17.583 11:22:58 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:17.583 11:22:58 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:17.583 11:22:58 -- scripts/common.sh@297 -- # local bdf= 00:19:17.583 11:22:58 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:17.583 11:22:58 -- scripts/common.sh@232 -- # local class 00:19:17.583 11:22:58 -- scripts/common.sh@233 -- # local subclass 00:19:17.583 11:22:58 -- scripts/common.sh@234 -- # local progif 00:19:17.583 11:22:58 -- scripts/common.sh@235 -- # printf %02x 1 00:19:17.583 11:22:58 -- scripts/common.sh@235 -- # class=01 00:19:17.583 11:22:58 -- scripts/common.sh@236 -- # printf %02x 8 00:19:17.583 11:22:58 -- scripts/common.sh@236 -- # subclass=08 00:19:17.583 11:22:58 -- scripts/common.sh@237 -- # printf %02x 2 00:19:17.583 11:22:58 -- scripts/common.sh@237 -- # progif=02 00:19:17.583 11:22:58 -- scripts/common.sh@239 -- # hash lspci 00:19:17.583 11:22:58 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:17.583 11:22:58 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:17.583 11:22:58 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:17.583 11:22:58 -- 
scripts/common.sh@244 -- # tr -d '"' 00:19:17.583 11:22:58 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:17.583 11:22:58 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:17.583 11:22:58 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:17.583 11:22:58 -- scripts/common.sh@15 -- # local i 00:19:17.583 11:22:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:17.583 11:22:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:17.583 11:22:58 -- scripts/common.sh@24 -- # return 0 00:19:17.583 11:22:58 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:17.583 11:22:58 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:17.583 11:22:58 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:17.583 11:22:58 -- scripts/common.sh@15 -- # local i 00:19:17.583 11:22:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:17.583 11:22:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:17.583 11:22:58 -- scripts/common.sh@24 -- # return 0 00:19:17.583 11:22:58 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:17.583 11:22:58 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:17.583 11:22:58 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:17.583 11:22:58 -- scripts/common.sh@322 -- # uname -s 00:19:17.583 11:22:58 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:17.583 11:22:58 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:17.583 11:22:58 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:17.583 11:22:58 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:17.583 11:22:58 -- scripts/common.sh@322 -- # uname -s 00:19:17.583 11:22:58 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:17.583 11:22:58 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:17.583 11:22:58 -- scripts/common.sh@327 -- # (( 2 )) 00:19:17.583 11:22:58 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:17.583 11:22:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:17.583 11:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.583 11:22:58 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 ************************************ 00:19:17.583 START TEST spdk_target_abort 00:19:17.583 ************************************ 00:19:17.583 11:22:58 -- common/autotest_common.sh@1104 -- # spdk_target 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:17.583 11:22:58 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:17.583 11:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.583 11:22:58 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 spdk_targetn1 00:19:17.583 11:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:17.583 11:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.583 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 [2024-10-13 
11:22:59.031166] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.583 11:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:17.583 11:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.583 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 11:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:17.583 11:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.583 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 11:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:17.583 11:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.583 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.583 [2024-10-13 11:22:59.059294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.583 11:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:17.583 11:22:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:20.891 Initializing NVMe Controllers 00:19:20.892 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:20.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:20.892 Initialization complete. Launching workers. 00:19:20.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10395, failed: 0 00:19:20.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1084, failed to submit 9311 00:19:20.892 success 775, unsuccess 309, failed 0 00:19:20.892 11:23:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:20.892 11:23:02 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:24.196 Initializing NVMe Controllers 00:19:24.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:24.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:24.196 Initialization complete. Launching workers. 00:19:24.196 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8850, failed: 0 00:19:24.196 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1185, failed to submit 7665 00:19:24.197 success 398, unsuccess 787, failed 0 00:19:24.197 11:23:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:24.197 11:23:05 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:27.484 Initializing NVMe Controllers 00:19:27.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:27.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:27.484 Initialization complete. Launching workers. 
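Each pass in this group runs the same abort workload and varies only the abort queue depth (-q 4, 24, 64), so the submitted and failed-to-submit counts above and below can be compared directly across depths. The sweep reduces to a small loop; a sketch using the connection string from this job:

# Sketch: the abort queue-depth sweep against the spdk_target subsystem.
abort=/home/vagrant/spdk_repo/spdk/build/examples/abort
conn='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'

for qd in 4 24 64; do
    # -w rw -M 50 -o 4096: 50/50 read/write mix of 4 KiB I/Os, abort queue depth $qd
    "$abort" -q "$qd" -w rw -M 50 -o 4096 -r "$conn"
done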
00:19:27.484 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32068, failed: 0 00:19:27.484 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2326, failed to submit 29742 00:19:27.484 success 448, unsuccess 1878, failed 0 00:19:27.484 11:23:08 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:27.484 11:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.484 11:23:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.484 11:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.484 11:23:08 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:27.484 11:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.484 11:23:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.743 11:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.743 11:23:09 -- target/abort_qd_sizes.sh@62 -- # killprocess 75447 00:19:27.743 11:23:09 -- common/autotest_common.sh@926 -- # '[' -z 75447 ']' 00:19:27.743 11:23:09 -- common/autotest_common.sh@930 -- # kill -0 75447 00:19:27.743 11:23:09 -- common/autotest_common.sh@931 -- # uname 00:19:27.743 11:23:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.743 11:23:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75447 00:19:27.743 killing process with pid 75447 00:19:27.743 11:23:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:27.743 11:23:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:27.743 11:23:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75447' 00:19:27.743 11:23:09 -- common/autotest_common.sh@945 -- # kill 75447 00:19:27.743 11:23:09 -- common/autotest_common.sh@950 -- # wait 75447 00:19:27.743 00:19:27.743 real 0m10.384s 00:19:27.743 user 0m42.789s 00:19:27.743 sys 0m1.998s 00:19:27.743 11:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.743 11:23:09 -- common/autotest_common.sh@10 -- # set +x 00:19:27.743 ************************************ 00:19:27.743 END TEST spdk_target_abort 00:19:27.743 ************************************ 00:19:28.002 11:23:09 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:28.002 11:23:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:28.002 11:23:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.002 11:23:09 -- common/autotest_common.sh@10 -- # set +x 00:19:28.002 ************************************ 00:19:28.002 START TEST kernel_target_abort 00:19:28.002 ************************************ 00:19:28.002 11:23:09 -- common/autotest_common.sh@1104 -- # kernel_target 00:19:28.002 11:23:09 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:28.002 11:23:09 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:28.002 11:23:09 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:28.002 11:23:09 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:28.002 11:23:09 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:28.002 11:23:09 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:28.002 11:23:09 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:28.002 11:23:09 -- nvmf/common.sh@627 -- # local block nvme 00:19:28.002 11:23:09 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:28.002 11:23:09 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:28.002 11:23:09 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:28.002 11:23:09 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:28.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.261 Waiting for block devices as requested 00:19:28.261 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.520 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.520 11:23:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.520 11:23:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:28.520 11:23:09 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:28.520 11:23:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:28.520 11:23:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:28.520 No valid GPT data, bailing 00:19:28.520 11:23:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:28.520 11:23:10 -- scripts/common.sh@393 -- # pt= 00:19:28.520 11:23:10 -- scripts/common.sh@394 -- # return 1 00:19:28.520 11:23:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:28.520 11:23:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.520 11:23:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:28.520 11:23:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:28.520 11:23:10 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:28.520 11:23:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:28.520 No valid GPT data, bailing 00:19:28.520 11:23:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:28.520 11:23:10 -- scripts/common.sh@393 -- # pt= 00:19:28.520 11:23:10 -- scripts/common.sh@394 -- # return 1 00:19:28.520 11:23:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:28.520 11:23:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.520 11:23:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:28.520 11:23:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:28.520 11:23:10 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:28.520 11:23:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:28.779 No valid GPT data, bailing 00:19:28.779 11:23:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:28.779 11:23:10 -- scripts/common.sh@393 -- # pt= 00:19:28.779 11:23:10 -- scripts/common.sh@394 -- # return 1 00:19:28.779 11:23:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:28.779 11:23:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.779 11:23:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:28.779 11:23:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:28.779 11:23:10 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:28.779 11:23:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:28.779 No valid GPT data, bailing 00:19:28.779 11:23:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:28.779 11:23:10 -- scripts/common.sh@393 -- # pt= 00:19:28.779 11:23:10 -- scripts/common.sh@394 -- # return 1 00:19:28.779 11:23:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:28.779 11:23:10 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:28.779 11:23:10 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:28.779 11:23:10 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:28.779 11:23:10 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:28.779 11:23:10 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:28.779 11:23:10 -- nvmf/common.sh@654 -- # echo 1 00:19:28.779 11:23:10 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:28.779 11:23:10 -- nvmf/common.sh@656 -- # echo 1 00:19:28.779 11:23:10 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:28.779 11:23:10 -- nvmf/common.sh@663 -- # echo tcp 00:19:28.779 11:23:10 -- nvmf/common.sh@664 -- # echo 4420 00:19:28.779 11:23:10 -- nvmf/common.sh@665 -- # echo ipv4 00:19:28.779 11:23:10 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:28.779 11:23:10 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:fbe6aeb5-20f4-45b3-886a-eb976206cb47 --hostid=fbe6aeb5-20f4-45b3-886a-eb976206cb47 -a 10.0.0.1 -t tcp -s 4420 00:19:28.779 00:19:28.779 Discovery Log Number of Records 2, Generation counter 2 00:19:28.779 =====Discovery Log Entry 0====== 00:19:28.779 trtype: tcp 00:19:28.779 adrfam: ipv4 00:19:28.779 subtype: current discovery subsystem 00:19:28.779 treq: not specified, sq flow control disable supported 00:19:28.779 portid: 1 00:19:28.779 trsvcid: 4420 00:19:28.779 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:28.779 traddr: 10.0.0.1 00:19:28.779 eflags: none 00:19:28.779 sectype: none 00:19:28.779 =====Discovery Log Entry 1====== 00:19:28.779 trtype: tcp 00:19:28.779 adrfam: ipv4 00:19:28.779 subtype: nvme subsystem 00:19:28.779 treq: not specified, sq flow control disable supported 00:19:28.779 portid: 1 00:19:28.779 trsvcid: 4420 00:19:28.779 subnqn: kernel_target 00:19:28.779 traddr: 10.0.0.1 00:19:28.779 eflags: none 00:19:28.779 sectype: none 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
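
The configure_kernel_target trace above boils down to the standard Linux nvmet configfs sequence: create a subsystem, attach /dev/nvme1n3 as namespace 1, open a TCP port on 10.0.0.1:4420, and link the subsystem to the port so the nvme discover call that follows can see it. A minimal shell sketch of that sequence is below; the directory paths and echoed values are taken from the trace, but the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are filled in from the usual nvmet layout as an assumption, because the xtrace output does not show redirect targets.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/kernel_target
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  modprobe nvmet                                  # the teardown later also unloads nvmet_tcp
  mkdir "$subsys" "$ns" "$port"

  echo SPDK-kernel_target > "$subsys/attr_model"  # exact attribute (attr_model vs attr_serial) not visible in the trace
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n3 > "$ns/device_path"           # device selected by the no-valid-GPT scan above
  echo 1 > "$ns/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"

  ln -s "$subsys" "$port/subsystems/"             # expose kernel_target on the TCP port
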
00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:28.779 11:23:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:32.087 Initializing NVMe Controllers 00:19:32.087 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:32.087 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:32.087 Initialization complete. Launching workers. 00:19:32.087 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31858, failed: 0 00:19:32.087 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31858, failed to submit 0 00:19:32.087 success 0, unsuccess 31858, failed 0 00:19:32.087 11:23:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:32.087 11:23:13 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:35.373 Initializing NVMe Controllers 00:19:35.373 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:35.373 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:35.373 Initialization complete. Launching workers. 00:19:35.374 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65981, failed: 0 00:19:35.374 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27639, failed to submit 38342 00:19:35.374 success 0, unsuccess 27639, failed 0 00:19:35.374 11:23:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:35.374 11:23:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:38.667 Initializing NVMe Controllers 00:19:38.667 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:38.667 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:38.667 Initialization complete. Launching workers. 
00:19:38.667 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76339, failed: 0 00:19:38.667 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19058, failed to submit 57281 00:19:38.667 success 0, unsuccess 19058, failed 0 00:19:38.667 11:23:19 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:38.667 11:23:19 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:38.667 11:23:19 -- nvmf/common.sh@677 -- # echo 0 00:19:38.667 11:23:19 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:38.667 11:23:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:38.667 11:23:19 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:38.667 11:23:19 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:38.667 11:23:19 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:38.667 11:23:19 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:38.667 ************************************ 00:19:38.667 END TEST kernel_target_abort 00:19:38.667 ************************************ 00:19:38.667 00:19:38.667 real 0m10.469s 00:19:38.667 user 0m5.571s 00:19:38.667 sys 0m2.373s 00:19:38.667 11:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.667 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.667 11:23:19 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:38.667 11:23:19 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:38.667 11:23:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.667 11:23:19 -- nvmf/common.sh@116 -- # sync 00:19:38.667 11:23:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.667 11:23:19 -- nvmf/common.sh@119 -- # set +e 00:19:38.667 11:23:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.667 11:23:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.667 rmmod nvme_tcp 00:19:38.667 rmmod nvme_fabrics 00:19:38.667 rmmod nvme_keyring 00:19:38.667 11:23:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.667 11:23:19 -- nvmf/common.sh@123 -- # set -e 00:19:38.667 11:23:19 -- nvmf/common.sh@124 -- # return 0 00:19:38.667 11:23:19 -- nvmf/common.sh@477 -- # '[' -n 75447 ']' 00:19:38.667 11:23:19 -- nvmf/common.sh@478 -- # killprocess 75447 00:19:38.667 11:23:19 -- common/autotest_common.sh@926 -- # '[' -z 75447 ']' 00:19:38.667 11:23:19 -- common/autotest_common.sh@930 -- # kill -0 75447 00:19:38.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75447) - No such process 00:19:38.667 Process with pid 75447 is not found 00:19:38.667 11:23:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75447 is not found' 00:19:38.667 11:23:19 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:38.667 11:23:19 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:39.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.235 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:39.235 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:39.235 11:23:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.235 11:23:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.235 11:23:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.235 11:23:20 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:39.235 11:23:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.235 11:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:39.235 11:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.235 11:23:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:39.235 ************************************ 00:19:39.235 END TEST nvmf_abort_qd_sizes 00:19:39.235 ************************************ 00:19:39.235 00:19:39.235 real 0m24.279s 00:19:39.235 user 0m49.795s 00:19:39.235 sys 0m5.575s 00:19:39.235 11:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.235 11:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:39.235 11:23:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:39.235 11:23:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:39.235 11:23:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:39.235 11:23:20 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:39.235 11:23:20 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:19:39.235 11:23:20 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:19:39.235 11:23:20 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:19:39.235 11:23:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:39.235 11:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:39.235 11:23:20 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:19:39.235 11:23:20 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:19:39.235 11:23:20 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:19:39.235 11:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:41.138 INFO: APP EXITING 00:19:41.138 INFO: killing all VMs 00:19:41.138 INFO: killing vhost app 00:19:41.138 INFO: EXIT DONE 00:19:41.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.704 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:41.704 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:42.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.531 Cleaning 00:19:42.531 Removing: /var/run/dpdk/spdk0/config 00:19:42.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:42.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:42.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:42.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:42.531 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:42.531 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:42.531 Removing: /var/run/dpdk/spdk1/config 00:19:42.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:42.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:42.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:42.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:42.531 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:42.531 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:42.531 Removing: /var/run/dpdk/spdk2/config 00:19:42.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:42.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:42.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:42.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:42.531 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:42.531 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:42.531 Removing: /var/run/dpdk/spdk3/config 00:19:42.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:42.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:42.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:42.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:42.531 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:42.531 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:42.531 Removing: /var/run/dpdk/spdk4/config 00:19:42.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:42.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:42.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:42.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:42.531 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:42.531 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:42.531 Removing: /dev/shm/nvmf_trace.0 00:19:42.531 Removing: /dev/shm/spdk_tgt_trace.pid53848 00:19:42.531 Removing: /var/run/dpdk/spdk0 00:19:42.531 Removing: /var/run/dpdk/spdk1 00:19:42.531 Removing: /var/run/dpdk/spdk2 00:19:42.532 Removing: /var/run/dpdk/spdk3 00:19:42.532 Removing: /var/run/dpdk/spdk4 00:19:42.532 Removing: /var/run/dpdk/spdk_pid53704 00:19:42.532 Removing: /var/run/dpdk/spdk_pid53848 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54079 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54270 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54415 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54473 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54548 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54638 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54703 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54747 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54777 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54838 00:19:42.532 Removing: /var/run/dpdk/spdk_pid54937 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55369 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55415 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55461 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55477 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55544 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55560 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55627 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55643 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55683 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55701 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55741 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55759 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55875 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55916 00:19:42.532 Removing: /var/run/dpdk/spdk_pid55984 00:19:42.532 Removing: /var/run/dpdk/spdk_pid56036 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56060 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56119 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56138 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56167 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56187 
00:19:42.791 Removing: /var/run/dpdk/spdk_pid56221 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56235 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56270 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56289 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56324 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56338 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56372 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56392 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56421 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56440 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56475 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56489 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56523 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56543 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56573 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56598 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56627 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56641 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56681 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56695 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56724 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56744 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56778 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56792 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56827 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56846 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56875 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56895 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56929 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56952 00:19:42.791 Removing: /var/run/dpdk/spdk_pid56984 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57008 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57046 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57060 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57094 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57114 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57144 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57213 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57292 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57602 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57614 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57645 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57652 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57671 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57689 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57696 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57715 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57733 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57740 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57759 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57777 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57790 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57805 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57823 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57831 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57850 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57868 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57881 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57894 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57924 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57936 00:19:42.791 Removing: /var/run/dpdk/spdk_pid57968 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58026 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58052 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58062 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58090 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58100 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58106 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58144 00:19:42.791 Removing: 
/var/run/dpdk/spdk_pid58154 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58186 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58188 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58195 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58203 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58210 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58218 00:19:42.791 Removing: /var/run/dpdk/spdk_pid58225 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58233 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58259 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58286 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58294 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58324 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58328 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58341 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58376 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58393 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58414 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58427 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58429 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58442 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58444 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58450 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58459 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58461 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58534 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58576 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58674 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58700 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58744 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58764 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58779 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58793 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58823 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58843 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58905 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58919 00:19:43.051 Removing: /var/run/dpdk/spdk_pid58957 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59047 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59094 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59120 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59205 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59251 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59277 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59498 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59590 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59618 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59933 00:19:43.051 Removing: /var/run/dpdk/spdk_pid59971 00:19:43.051 Removing: /var/run/dpdk/spdk_pid60269 00:19:43.052 Removing: /var/run/dpdk/spdk_pid60682 00:19:43.052 Removing: /var/run/dpdk/spdk_pid60941 00:19:43.052 Removing: /var/run/dpdk/spdk_pid61681 00:19:43.052 Removing: /var/run/dpdk/spdk_pid62495 00:19:43.052 Removing: /var/run/dpdk/spdk_pid62618 00:19:43.052 Removing: /var/run/dpdk/spdk_pid62680 00:19:43.052 Removing: /var/run/dpdk/spdk_pid63932 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64146 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64445 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64555 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64688 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64716 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64743 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64771 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64868 00:19:43.052 Removing: /var/run/dpdk/spdk_pid64997 00:19:43.052 Removing: /var/run/dpdk/spdk_pid65139 00:19:43.052 Removing: /var/run/dpdk/spdk_pid65214 00:19:43.052 Removing: /var/run/dpdk/spdk_pid65603 00:19:43.052 Removing: /var/run/dpdk/spdk_pid65950 
00:19:43.052 Removing: /var/run/dpdk/spdk_pid65958 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68141 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68153 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68422 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68442 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68456 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68482 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68499 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68579 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68586 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68694 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68702 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68810 00:19:43.052 Removing: /var/run/dpdk/spdk_pid68812 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69215 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69264 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69367 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69452 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69755 00:19:43.052 Removing: /var/run/dpdk/spdk_pid69951 00:19:43.052 Removing: /var/run/dpdk/spdk_pid70330 00:19:43.311 Removing: /var/run/dpdk/spdk_pid70862 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71288 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71341 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71388 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71443 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71538 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71585 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71645 00:19:43.311 Removing: /var/run/dpdk/spdk_pid71700 00:19:43.311 Removing: /var/run/dpdk/spdk_pid72029 00:19:43.311 Removing: /var/run/dpdk/spdk_pid73202 00:19:43.311 Removing: /var/run/dpdk/spdk_pid73343 00:19:43.311 Removing: /var/run/dpdk/spdk_pid73585 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74139 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74299 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74461 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74558 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74725 00:19:43.311 Removing: /var/run/dpdk/spdk_pid74835 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75498 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75534 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75569 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75813 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75849 00:19:43.311 Removing: /var/run/dpdk/spdk_pid75888 00:19:43.311 Clean 00:19:43.311 killing process with pid 48032 00:19:43.311 killing process with pid 48034 00:19:43.311 11:23:24 -- common/autotest_common.sh@1436 -- # return 0 00:19:43.311 11:23:24 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:19:43.311 11:23:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.311 11:23:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.311 11:23:24 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:19:43.311 11:23:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.311 11:23:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.570 11:23:24 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:43.570 11:23:24 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:43.570 11:23:24 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:43.570 11:23:24 -- spdk/autotest.sh@394 -- # hash lcov 00:19:43.570 11:23:24 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:19:43.570 11:23:24 -- spdk/autotest.sh@396 -- # hostname 00:19:43.570 11:23:24 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:43.829 geninfo: WARNING: invalid characters removed from testname! 00:20:10.372 11:23:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:10.372 11:23:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.899 11:23:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.839 11:23:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.380 11:23:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:19.963 11:24:01 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.496 11:24:03 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:22.496 11:24:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.496 11:24:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:22.496 11:24:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.496 11:24:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.497 11:24:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
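
The coverage post-processing in this step captures the counters accumulated while the tests ran, merges them with the baseline taken before the tests, and strips code that is not SPDK's own. The sketch below condenses the traced lcov calls; the paths and filter patterns come from the log, while the LCOV_OPTS variable and the loop are only shorthand introduced here (the log also passes genhtml_* and geninfo_all_blocks rc options, omitted for brevity).

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
  src=/home/vagrant/spdk_repo/spdk
  out=$src/../output

  # capture counters produced during the tests, tagged with the hostname
  lcov $LCOV_OPTS -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

  # merge with the pre-test baseline
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # drop coverage for DPDK, system headers and helper apps that are not under test
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done

  rm -f cov_base.info cov_test.info
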
00:20:22.497 11:24:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.497 11:24:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.497 11:24:03 -- paths/export.sh@5 -- $ export PATH 00:20:22.497 11:24:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.497 11:24:03 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:22.497 11:24:03 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:22.497 11:24:03 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728818643.XXXXXX 00:20:22.497 11:24:03 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728818643.BjDSid 00:20:22.497 11:24:03 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:22.497 11:24:03 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:20:22.497 11:24:03 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:22.497 11:24:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:22.497 11:24:03 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:22.497 11:24:03 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:22.497 11:24:03 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:20:22.497 11:24:03 -- common/autotest_common.sh@10 -- $ set +x 00:20:22.497 11:24:03 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:22.497 11:24:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:22.497 11:24:03 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:22.497 11:24:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:22.497 11:24:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:20:22.497 11:24:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:22.497 11:24:03 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:22.497 11:24:03 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:22.497 11:24:03 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:22.497 
11:24:03 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:22.497 11:24:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:22.497 + [[ -n 5234 ]] 00:20:22.497 + sudo kill 5234 00:20:22.764 [Pipeline] } 00:20:22.780 [Pipeline] // timeout 00:20:22.786 [Pipeline] } 00:20:22.800 [Pipeline] // stage 00:20:22.806 [Pipeline] } 00:20:22.820 [Pipeline] // catchError 00:20:22.829 [Pipeline] stage 00:20:22.831 [Pipeline] { (Stop VM) 00:20:22.843 [Pipeline] sh 00:20:23.124 + vagrant halt 00:20:26.472 ==> default: Halting domain... 00:20:33.051 [Pipeline] sh 00:20:33.332 + vagrant destroy -f 00:20:36.631 ==> default: Removing domain... 00:20:36.672 [Pipeline] sh 00:20:36.957 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:36.967 [Pipeline] } 00:20:36.981 [Pipeline] // stage 00:20:36.986 [Pipeline] } 00:20:36.999 [Pipeline] // dir 00:20:37.004 [Pipeline] } 00:20:37.019 [Pipeline] // wrap 00:20:37.025 [Pipeline] } 00:20:37.038 [Pipeline] // catchError 00:20:37.045 [Pipeline] stage 00:20:37.047 [Pipeline] { (Epilogue) 00:20:37.058 [Pipeline] sh 00:20:37.339 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:42.647 [Pipeline] catchError 00:20:42.649 [Pipeline] { 00:20:42.661 [Pipeline] sh 00:20:42.966 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:42.966 Artifacts sizes are good 00:20:42.975 [Pipeline] } 00:20:42.988 [Pipeline] // catchError 00:20:42.999 [Pipeline] archiveArtifacts 00:20:43.006 Archiving artifacts 00:20:43.124 [Pipeline] cleanWs 00:20:43.136 [WS-CLEANUP] Deleting project workspace... 00:20:43.136 [WS-CLEANUP] Deferred wipeout is used... 00:20:43.142 [WS-CLEANUP] done 00:20:43.144 [Pipeline] } 00:20:43.159 [Pipeline] // stage 00:20:43.164 [Pipeline] } 00:20:43.178 [Pipeline] // node 00:20:43.183 [Pipeline] End of Pipeline 00:20:43.223 Finished: SUCCESS
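
For reference, the abort_qd_sizes sweep that produced the success/unsuccess tallies earlier in this run reduces to the loop below. It is a sketch reconstructed from the traced commands: the queue depths, tool path and flags are exactly those shown in the trace (-q queue depth, -w rw with -M 50 for a 50/50 read/write mix, -o 4096-byte I/O, -r transport ID), and the runs against the SPDK target differ only in the transport string (traddr:10.0.0.2, subnqn:nqn.2016-06.io.spdk:spdk_target).

  # queue depths swept by the test (abort_qd_sizes.sh@26)
  qds=(4 24 64)

  # transport ID assembled field by field in the trace
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'

  for qd in "${qds[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Each run reports the I/Os completed per namespace and how many abort commands were submitted versus failed to submit, which is what the success/unsuccess/failed lines in the output summarize.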